FastMoE
===

## Introduction

An easy-to-use but efficient implementation of the Mixture of Experts (MoE) 
model for PyTorch. 

## Installation

### Prerequisites

PyTorch with CUDA is required. The repository is currently tested with PyTorch
v1.8.0 and CUDA 10, and is designed to be compatible with older versions.

If the distributed expert feature is enabled, NCCL with P2P communication
support (typically version `>=2.7.5`) is needed.

### Installing

FastMoE contains a set of customized PyTorch operators, including both C and
Python components. Run `python setup.py install` to install FastMoE and start
using it for training.

The distributed expert feature is disabled by default. To enable it, pass the
environment variable `USE_NCCL=1` to the setup script, e.g.
`USE_NCCL=1 python setup.py install`.

Note that an extra NCCL developer package is needed, and it has to be
consistent with your PyTorch's NCCL version, which can be inspected by running
`torch.cuda.nccl.version()`. The [official PyTorch docker
image](https://hub.docker.com/r/pytorch/pytorch) is recommended, as the
environment there is well set up. Otherwise, you can visit the [download link
of all NCCL versions](https://developer.nvidia.com/nccl/nccl-legacy-downloads)
to download the NCCL package that suits your setup.
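
For example, the version bundled with PyTorch can be checked from a Python
shell, using the `torch.cuda.nccl.version()` call mentioned above:

```python
import torch

# The NCCL developer package you install has to match the version
# printed here (the return format varies across PyTorch releases).
print(torch.cuda.nccl.version())
```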

## Usage 

### FMoEfy a Transformer model

Transformer is currently one of the most popular model architectures to be
extended by MoE. Using FastMoE, a Transformer-based model can be extended to
an MoE model with a one-key plugin, as shown below.

For example, when using [Megatron-LM](https://github.com/nvidia/megatron-lm),
the following lines scale the MLP layers up to multiple experts.

```python
model = ...

from fmoe.megatron import fmoefy
model = fmoefy(model, num_experts=4)  # number of experts per worker

train(model, ...)
```

A detailed tutorial to _fmoefy_ Megatron-LM can be found
[here](examples/megatron).

### Using FastMoE as a PyTorch module

An example MoE transformer model can be seen in the
[Transformer-XL](examples/transformer-xl) example. The easiest way is to
replace the MLP layers with the `FMoE` layers.
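
Below is a minimal sketch of such a replacement. The constructor arguments of
`FMoE` shown here are illustrative assumptions; consult the `FMoE` class
itself for the authoritative signature.

```python
import torch.nn as nn

from fmoe import FMoE  # FastMoE's MoE layer


class Block(nn.Module):
    """A simplified transformer block whose dense MLP is an MoE layer."""

    def __init__(self, d_model=512, num_experts=4):
        super().__init__()
        self.ln = nn.LayerNorm(d_model)
        # Illustrative keyword arguments only; check the FMoE class for
        # its actual constructor signature.
        self.moe_mlp = FMoE(num_expert=num_experts, d_model=d_model)

    def forward(self, x):
        # The MoE layer is called exactly like the dense MLP it replaces.
        return x + self.moe_mlp(self.ln(x))
```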

### Using FastMoE in Parallel

FastMoE supports both data parallel and model parallel modes.

### Data Parallel

In FastMoE's data parallel mode, both the gate and the experts are replicated on each worker. 
The following figure shows the forward pass of a 3-expert MoE with 2-way data parallel.

<p align="center">
<img src="doc/fastmoe_data_parallel.png" width="600">
</p>

For data parallel, no extra coding is needed. FastMoE works seamlessly with PyTorch's `DataParallel` or `DistributedDataParallel`.
The only drawback of data parallel is that the number of experts is constrained by each worker's memory.
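
For instance, the following sketch wraps a model containing FMoE layers with
PyTorch's standard wrapper; the model construction is purely illustrative.

```python
import torch.nn as nn

# Reusing the illustrative `Block` sketch from the previous section as a
# stand-in for any model that contains FMoE layers.
moe_model = Block(d_model=512, num_experts=4).cuda()

# In data parallel mode both the gate and the experts are replicated on
# every worker, so the standard PyTorch wrapper works with no extra code.
parallel_model = nn.DataParallel(moe_model, device_ids=[0, 1])
```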

### Model Parallel

In FastMoE's model parallel mode, the gate network is still replicated on each
worker, but the experts are spread across workers.
Thus, at the cost of additional communication, FastMoE supports an expert pool
whose size is proportional to the number of workers.

The following figure shows the forward pass of a 6-expert MoE with 2-way model parallel. Note that experts 1-3 are located on worker 1 while experts 4-6 are located on worker 2.

<p align="center">
<img src="doc/fastmoe_model_parallel.png" width="600">
</p>

FastMoE's model parallel mode requires sophisticated parallel strategies that
neither PyTorch nor Megatron-LM provides. The
`fmoe.DistributedGroupedDataParallel` module is introduced to replace
PyTorch's DDP module.
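
A minimal sketch of the replacement is shown below; only the wrapped module
argument is taken from this README, and any further constructor options are
omitted.

```python
from fmoe import DistributedGroupedDataParallel

# `model` is the MoE model built earlier (e.g. via `fmoefy`). Wrap it
# with FastMoE's replacement for torch.nn.parallel.DistributedDataParallel,
# so that parameters replicated across workers (such as the gate) stay
# synchronized while each worker keeps its own local experts.
model = DistributedGroupedDataParallel(model)
```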