Fast MoE
===

## Introduction

An easy-to-use but efficient implementation of the Mixture of Experts (MoE) 
model for PyTorch. 

## Installation

### Prerequisites

PyTorch with CUDA is required. The repository is currently tested with PyTorch
v1.6.0 and CUDA 10, and is designed to be compatible with other versions.

If the distributed version is enabled, NCCL with P2P communication support
(typically version >= 2.7.5) is needed.

### Installing

Fast MoE contains a set of customized PyTorch operators, including both C and
Python components. Use `python setup.py install` to easily install and enjoy
using Fast MoE for training.
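
After installation, a quick sanity check is to import the package from Python;
the module name `fmoe` matches the import used in the usage examples below.

```python
# Minimal post-install check: if the C components were built and installed
# correctly, importing the package should complete without errors.
import fmoe

print("Fast MoE imported successfully")
```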

## Usage 

### FMoEfy a transformer model

The Transformer is currently the most popular model to be extended with MoE. Using
Fast MoE, a transformer-based model can be extended to an MoE model with a one-key
plugin, as shown below.

Assume that there is a PyTorch model `model` with MLP layers located at
`model.language_model.transformer.layers[<idx>].mlp`. The following two lines
scale the MLP layers up to multiple experts.

```python
from fmoe.megatron import fmoefy
model = fmoefy(model, num_experts=<number of experts per worker>)
```

### Using Fast MoE as a PyTorch module

Examples can be seen in [examples](examples/). The easiest way is to replace the
feed-forward layer with the `FMoE` layer.
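
As a rough sketch of what that replacement can look like (the import path and
the constructor arguments `num_expert` and `d_model` are assumptions for
illustration and may not match the actual `FMoE` signature; the scripts in
[examples](examples/) are authoritative):

```python
import torch
import torch.nn as nn

# Import path is an assumption; check examples/ for the exact one.
from fmoe import FMoE

d_model = 1024

# A plain feed-forward sublayer as found in a standard transformer block.
dense_ffn = nn.Sequential(
    nn.Linear(d_model, 4 * d_model),
    nn.GELU(),
    nn.Linear(4 * d_model, d_model),
)

# Hypothetical drop-in replacement with an MoE layer; the keyword arguments
# below are assumed, not a verified signature.
moe_ffn = FMoE(num_expert=4, d_model=d_model)

x = torch.randn(8, d_model)  # a batch of token representations
y = moe_ffn(x)               # used in place of dense_ffn(x)
```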

### Using Fast MoE in Parallel

For data parallel, nothing else is needed.

For expert parallel, in which experts are located separately across workers,
PyTorch needs to be built with the NCCL backend. Set the environment variable
`USE_NCCL=1` when running `setup.py` to enable distributing experts across
workers. Note that the parameters of the MoE layers should then be excluded
from the data parallel parameter synchronization list.
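
As an illustrative sketch of that exclusion (not the project's own
synchronization code), one could average gradients by hand and skip expert
parameters; the name-based filter below is a hypothetical convention:

```python
import torch.distributed as dist


def allreduce_non_expert_grads(model):
    """Average gradients across data-parallel workers, skipping MoE experts.

    The `"experts" in name` filter is a hypothetical way to identify expert
    parameters for illustration; use whatever marker your model provides.
    """
    world_size = dist.get_world_size()
    for name, param in model.named_parameters():
        if "experts" in name or param.grad is None:
            continue  # expert weights are worker-local; do not average them
        dist.all_reduce(param.grad.data)
        param.grad.data /= world_size
```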