<img height='60px' src='doc/logo/rect.png'/>

[Release note](doc/release-note.md)
| [中文文档](doc/readme-cn.md)
| [Slack workspace](https://join.slack.com/t/fastmoe/shared_invite/zt-mz0ai6ol-ggov75D62YsgHfzShw8KYw)

## Introduction

An easy-to-use and efficient system to support the Mixture of Experts (MoE) 
model for PyTorch. 

## Installation

### Prerequisites

PyTorch with CUDA is required. The repository is currently tested with PyTorch
v1.10.0 and CUDA 11.3, with designed compatibility to older and newer versions.

The minimum version of supported PyTorch is `1.7.2` with CUDA `10`. However,
there are a few known issues that require manual modification of FastMoE's
code with specific older dependencies.

If the distributed expert feature is enabled, NCCL with P2P communication
support (typically version `>=2.7.5`) is needed.

### Installing

FastMoE contains a set of customized PyTorch operators, including both C and
Python components. Use `python setup.py install` to easily install and enjoy
using FastMoE for training.

A step-by-step tutorial for the installation procedure can be found [here](doc/installation-guide.md).

The distributed expert feature is enabled by default. If you want to disable
it, pass environment variable `USE_NCCL=0` to the setup script.

Note that an extra NCCL developer package is needed, and it has to be consistent
with your PyTorch's NCCL version, which can be inspected by running
`torch.cuda.nccl.version()`. The 
[official PyTorch docker image](https://hub.docker.com/r/pytorch/pytorch) is
recommended, as the environment is already well set up there. Otherwise, you can use the
[download link of all NCCL
versions](https://developer.nvidia.com/nccl/nccl-legacy-downloads) to download
the NCCL package that matches your setup.
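
For a quick check, the snippet below prints the CUDA and NCCL versions that your PyTorch build ships with; depending on the PyTorch version, `torch.cuda.nccl.version()` returns either a `(major, minor, patch)` tuple or a single encoded integer.

```python
import torch

# Inspect the CUDA and NCCL versions bundled with PyTorch. The NCCL developer
# package installed on the system should match the reported NCCL version
# before building FastMoE with the distributed expert feature enabled.
print(torch.version.cuda)         # CUDA version PyTorch was built with, e.g. '11.3'
print(torch.cuda.nccl.version())  # e.g. (2, 10, 3), or an integer like 2708 on older builds
```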

## Usage 

### FMoEfy a Transformer model

Transformer is currently one of the most popular models to be extended by MoE. Using
FastMoE, a Transformer-based model can be extended as MoE by a one-key plugin,
as shown below.

For example, when using [Megatron-LM](https://github.com/nvidia/megatron-lm),
the following lines can help you easily scale up the MLP layers to
multiple experts.

```python
model = ...

from fmoe.megatron import fmoefy
model = fmoefy(model, fmoe_num_experts=<number of experts per worker>)

train(model, ...)
```

A detailed tutorial to _moefy_ Megatron-LM can be found
[here](examples/megatron).

### Using FastMoE as a PyTorch module

An example MoE Transformer model can be seen in the
[Transformer-XL](examples/transformer-xl) example. The easiest way is to replace
the MLP layers with the `FMoE` layers.
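
As a rough sketch of this replacement, the snippet below builds a standalone MoE feed-forward layer. The `FMoETransformerMLP` layer and its `num_expert`, `d_model`, `d_hidden`, and `top_k` arguments reflect the versions tested above; verify the exact signature against the current source.

```python
import torch
from fmoe.transformer import FMoETransformerMLP

# A drop-in MoE replacement for a Transformer's dense MLP block
# (argument names may differ across FastMoE versions).
moe_mlp = FMoETransformerMLP(
    num_expert=4,    # number of experts on this worker
    d_model=512,     # hidden size of the Transformer
    d_hidden=2048,   # hidden size inside each expert's MLP
    top_k=2,         # number of experts each token is routed to
).cuda()

x = torch.randn(8, 16, 512, device="cuda")  # (batch, seq_len, d_model)
y = moe_mlp(x)                              # output has the same shape as x
```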

### Using FastMoE in Parallel

FastMoE supports multiple ways of parallel training. See [the comprehensive
document on parallelism](doc/parallelism) for details. The two
simplest ways of using FastMoE in parallel are shown below.

#### Data Parallel

In FastMoE's data parallel mode, both the gate and the experts are replicated on each worker. 
The following figure shows the forward pass of a 3-expert MoE with 2-way data parallel.

<p align="center">
<img src="doc/parallelism/fastmoe_data_parallel.png" width="600">
</p>

For data parallel, no extra coding is needed. FastMoE works seamlessly with PyTorch's `DataParallel` or `DistributedDataParallel`.
The only drawback of data parallel is that the number of experts is constrained by each worker's memory.
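
For reference, a minimal data-parallel setup is sketched below; it reuses the `FMoETransformerMLP` configuration from the sketch above and otherwise relies only on standard PyTorch DDP, launched with `torchrun` or an equivalent launcher.

```python
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP
from fmoe.transformer import FMoETransformerMLP

# Data parallelism: the gate and all experts are replicated on every worker,
# so standard DDP gradient averaging is all that is needed.
dist.init_process_group(backend="nccl")
local_rank = dist.get_rank() % torch.cuda.device_count()
torch.cuda.set_device(local_rank)

moe_mlp = FMoETransformerMLP(num_expert=4, d_model=512, d_hidden=2048).cuda()
model = DDP(moe_mlp, device_ids=[local_rank])
# ... the usual training loop follows
```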

#### Expert Parallel (also called Model Parallel in some previous versions)

In FastMoE's expert parallel mode, the gate network is still replicated on each worker but
experts are placed separately across workers.
Thus, at the cost of additional communication, FastMoE enjoys a large expert pool whose size is proportional to the number of workers.

The following figure shows the forward pass of a 6-expert MoE with 2-way expert parallel. Note that experts 1-3 are located on worker 1 while experts 4-6 are located on worker 2.

<p align="center">
<img src="doc/parallelism/fastmoe_expert_parallel.png" width="600">
</p>

FastMoE's expert parallel mode requires sophisticated parallel strategies that neither
PyTorch nor Megatron-LM provided when FastMoE was created. The
`fmoe.DistributedGroupedDataParallel` module is introduced to replace PyTorch's
DDP module.
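
A minimal expert-parallel sketch is shown below. The `world_size` argument and the `DistributedGroupedDataParallel` wrapper follow the API as of the versions tested above; the wrapper may accept additional arguments such as communication groups, so consult the [parallelism document](doc/parallelism) for details.

```python
import torch
import torch.distributed as dist
from fmoe.transformer import FMoETransformerMLP
from fmoe import DistributedGroupedDataParallel as fmoeDDP

# Expert parallelism: each worker hosts `num_expert` experts of its own, so
# the effective pool size is num_expert * world_size. FastMoE's grouped DDP
# keeps the replicated (non-expert) parameters in sync across workers.
dist.init_process_group(backend="nccl")
torch.cuda.set_device(dist.get_rank() % torch.cuda.device_count())

moe_mlp = FMoETransformerMLP(
    num_expert=2,                      # experts hosted on *this* worker
    d_model=512,
    d_hidden=2048,
    world_size=dist.get_world_size(),  # workers sharing the expert pool
).cuda()

model = fmoeDDP(moe_mlp)  # replaces torch.nn.parallel.DistributedDataParallel
```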

#### Faster Performance Features

From the PPoPP'22 paper, _FasterMoE: modeling and optimizing training of
large-scale dynamic pre-trained models_, we have adopted techniques to make
FastMoE's expert parallel mode much more efficient.

These optimizations are named **Faster Performance Features**, and can be
enabled via several environment variables. Their usage and constraints are
detailed in [a separate document](doc/fastermoe).

## Citation

For the core FastMoE system:

```
@article{he2021fastmoe,
      title={FastMoE: A Fast Mixture-of-Expert Training System}, 
      author={Jiaao He and Jiezhong Qiu and Aohan Zeng and Zhilin Yang and Jidong Zhai and Jie Tang},
      journal={arXiv preprint arXiv:2103.13262},
      year={2021}
}
```

For the [faster performance features](doc/fastermoe):

```
@inproceedings{he2022fastermoe,
    author = {He, Jiaao and Zhai, Jidong and Antunes, Tiago and Wang, Haojie and Luo, Fuwen and Shi, Shangfeng and Li, Qin},
    title = {FasterMoE: Modeling and Optimizing Training of Large-Scale Dynamic Pre-Trained Models},
    year = {2022},
    isbn = {9781450392044},
    publisher = {Association for Computing Machinery},
    address = {New York, NY, USA},
    url = {https://doi.org/10.1145/3503221.3508418},
    doi = {10.1145/3503221.3508418},
    booktitle = {Proceedings of the 27th ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming},
    pages = {120–134},
    numpages = {15},
    keywords = {parallelism, distributed deep learning, performance modeling},
    location = {Seoul, Republic of Korea},
    series = {PPoPP '22}
}
```

## Troubleshooting / Discussion

If you have any problems using FastMoE, or you are interested in getting involved in developing FastMoE, feel free to join [our Slack channel](https://join.slack.com/t/fastmoe/shared_invite/zt-mz0ai6ol-ggov75D62YsgHfzShw8KYw).