FastMoE
===

[Release note](doc/release-note.md)
| [中文文档](doc/readme-cn.md)
| [Slack workspace](https://join.slack.com/t/fastmoe/shared_invite/zt-mz0ai6ol-ggov75D62YsgHfzShw8KYw)

## Introduction

An easy-to-use yet efficient implementation of the Mixture of Experts (MoE)
model for PyTorch. 

## Installation

### Prerequisites

PyTorch with CUDA is required. The repository is currently tested with PyTorch
v1.8.0 and CUDA 10, and is designed to be compatible with older versions.

If the distributed expert feature is enabled, NCCL with P2P communication
support, typically version `>=2.7.5`, is needed.

### Installing

FastMoE contains a set of customized PyTorch operators, including both C and
Python components. Use `python setup.py install` to easily install and enjoy
using FastMoE for training.

The distributed expert feature is disabled by default. If you want to enable
it, pass the environment variable `USE_NCCL=1` to the setup script.

Note that an extra NCCL developer package is needed, and it has to be consistent
with the NCCL version of your PyTorch, which can be inspected by running
`torch.cuda.nccl.version()`. The 
[official PyTorch docker image](https://hub.docker.com/r/pytorch/pytorch) is
recommended, as the environment is well set up there. Otherwise, you can visit
the [download page of all NCCL
versions](https://developer.nvidia.com/nccl/nccl-legacy-downloads) to download
the NCCL package that suits you.
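For reference, a minimal snippet to inspect the NCCL version that your PyTorch
build ships with (the output format varies across PyTorch releases):

```python
import torch

# Print the NCCL version that this PyTorch build was compiled against;
# the NCCL developer package you install should match it.
print(torch.cuda.nccl.version())
```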

## Usage 

### FMoEfy a Transformer model

Transformer is currently one of the most popular models to be extended by MoE. Using
FastMoE, a Transformer-based model can be extended as MoE with a one-key plugin,
as shown below.

For example, when using [Megatron-LM](https://github.com/nvidia/megatron-lm),
the following lines can help you easily scale up the MLP layers to
multiple experts.

```python
model = ...

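# fmoefy replaces the MLP layers of the Megatron-LM model with MoE layers;
# num_experts is the number of experts hosted on each worker.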
from fmoe.megatron import fmoefy
model = fmoefy(model, num_experts=<number of experts per worker>)

train(model, ...)
```

A detailed tutorial to _moefy_ Megatron-LM can be found
[here](examples/megatron).

### Using FastMoE as a PyTorch module

An example MoE transformer model can be seen in the
[Transformer-XL](examples/transformer-xl) example. The easiest way is to replace
the MLP layers with the `FMoE` layers.
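
As a rough illustration, the sketch below builds a toy block whose feed-forward
sub-layer is an MoE layer. It assumes the `FMoETransformerMLP` helper and its
`num_expert` / `d_model` / `d_hidden` arguments; check the Transformer-XL example
for the exact API.

```python
import torch
from fmoe import FMoETransformerMLP  # assumed import path; see the examples for the exact one

class ToyBlock(torch.nn.Module):
    """A toy residual block whose feed-forward (MLP) part is a Mixture of Experts."""
    def __init__(self, d_model=512, d_hidden=2048, num_expert=4):
        super().__init__()
        self.norm = torch.nn.LayerNorm(d_model)
        # Drop-in replacement for the usual two-layer MLP: a gate routes each token
        # to a subset of the `num_expert` expert feed-forward networks.
        self.moe_mlp = FMoETransformerMLP(num_expert=num_expert,
                                          d_model=d_model, d_hidden=d_hidden)

    def forward(self, x):            # x: (batch, seq_len, d_model)
        return self.norm(x + self.moe_mlp(x))

block = ToyBlock().cuda()            # FastMoE's operators run on the GPU
y = block(torch.randn(2, 16, 512, device="cuda"))  # output keeps the input shape
```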

### Using FastMoE in Parallel

FastMoE supports both data parallel and model parallel. 

#### Data Parallel

In FastMoE's data parallel mode, both the gate and the experts are replicated on each worker. 
The following figure shows the forward pass of a 3-expert MoE with 2-way data parallel.

<p align="center">
<img src="doc/fastmoe_data_parallel.png" width="600">
</p>

For data parallel, no extra coding is needed. FastMoE works seamlessly with PyTorch's `DataParallel` or `DistributedDataParallel`.
The only drawback of data parallel is that the number of experts is constrained by each worker's memory.
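
A minimal sketch of this mode is shown below, again assuming the hypothetical
`FMoETransformerMLP` helper and a standard `torch.distributed` launch; nothing
FastMoE-specific is required beyond building the model.

```python
import torch
import torch.distributed as dist
from fmoe import FMoETransformerMLP  # assumed helper; see the examples for the exact API

# Standard PyTorch distributed setup -- data parallel needs nothing extra from
# FastMoE, because the gate and all experts are replicated on every worker.
dist.init_process_group(backend="nccl")
local_rank = dist.get_rank() % torch.cuda.device_count()
torch.cuda.set_device(local_rank)

moe_layer = FMoETransformerMLP(num_expert=4, d_model=512, d_hidden=2048).cuda()
ddp_layer = torch.nn.parallel.DistributedDataParallel(moe_layer, device_ids=[local_rank])

out = ddp_layer(torch.randn(2, 16, 512, device="cuda"))  # gradients are all-reduced as usual
```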

#### Model Parallel

In FastMoE's model parallel mode, the gate network is still replicated on each worker, but
the experts are placed separately across workers.
Thus, at the cost of additional communication, FastMoE enjoys a large expert pool whose size is proportional to the number of workers.

The following figure shows the forward pass of a 6-expert MoE with 2-way model parallel. Note that experts 1-3 are located on worker 1, while experts 4-6 are located on worker 2.

<p align="center">
<img src="doc/fastmoe_model_parallel.png" width="600">
</p>

FastMoE's model parallel requires sophisticated parallel strategies that neither PyTorch nor
Megatron-LM provides. The `fmoe.DistributedGroupedDataParallel` module is
introduced to replace PyTorch's DDP module.
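
A minimal sketch of the model parallel mode is given below. The `world_size`
argument and the exact constructor of `fmoe.DistributedGroupedDataParallel` are
assumptions here; see the Megatron and Transformer-XL examples for the real
integration.

```python
import torch
import torch.distributed as dist
from fmoe import FMoETransformerMLP, DistributedGroupedDataParallel  # assumed import paths

dist.init_process_group(backend="nccl")
torch.cuda.set_device(dist.get_rank() % torch.cuda.device_count())

# With world_size > 1, each worker hosts `num_expert` local experts, so the global
# expert pool grows to num_expert * world_size (the `world_size` kwarg is an assumption).
moe_layer = FMoETransformerMLP(num_expert=4, d_model=512, d_hidden=2048,
                               world_size=dist.get_world_size()).cuda()

# FastMoE's replacement for torch.nn.parallel.DistributedDataParallel: the gate and
# other dense parameters are synchronized, while expert parameters stay local.
model = DistributedGroupedDataParallel(moe_layer)
```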

## Troubleshooting / Discussion

If you have any problem using FastMoE, or you are interested in getting involved in developing FastMoE, feel free to join [our Slack channel](https://join.slack.com/t/fastmoe/shared_invite/zt-mz0ai6ol-ggov75D62YsgHfzShw8KYw).