Fast MoE
===

## Introduction

An easy-to-use but efficient implementation of the Mixture of Experts (MoE) 
model for PyTorch. 

## Installation

### Prerequisites

PyTorch with CUDA is required. The repository is currently tested with PyTorch
v1.8.0 and CUDA 10, and is designed to be compatible with older versions.

If the distributed expert feature is enabled, NCCL with P2P communication
support (typically version `>=2.7.5`) is needed.

### Installing

Fast MoE contains a set of customized PyTorch operators, including both C and
Python components. Run `python setup.py install` to install Fast MoE and start
using it for training.

The distributed expert feature is enabled by default. To disable it, pass the
environment variable `USE_NCCL=0` to the setup script.

## Usage 

### FMoEfy a transformer model

Transformers are currently the most popular models to be extended with MoE.
Using Fast MoE, a transformer-based model can be converted to an MoE model by a
one-key plugin, as shown below.

For example, when using [Megatron-LM](https://github.com/nvidia/megatron-lm),
the following lines can help you easily scale up the MLP layers to multiple
experts.

```python
model = ...

from fmoe.megatron import fmoefy
model = fmoefy(model, num_experts=<number of experts per worker>)

train(model, ...)
```

A detailed tutorial to _moefy_ Megatron-LM can be found
[here](examples/megatron).

### Using Fast MoE as a PyTorch module

An example MoE transformer model can be found in the
[Transformer-XL](examples/transformer-xl) example. The easiest way is to
replace the MLP layers with `FMoE` layers.
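
As an illustration, the sketch below builds a simplified transformer block whose
feed-forward MLP is swapped for an MoE layer. It assumes the
`fmoe.FMoETransformerMLP` helper with `num_expert`, `d_model` and `d_hidden`
arguments; the exact class and constructor may differ between versions, so
treat it as a sketch rather than the canonical API.

```python
import torch
from fmoe import FMoETransformerMLP  # assumed import path


class MoEBlock(torch.nn.Module):
    """A simplified transformer block with its dense MLP replaced by MoE."""

    def __init__(self, d_model=1024, d_hidden=4096, num_expert=4):
        super().__init__()
        self.attn = torch.nn.MultiheadAttention(d_model, num_heads=16)
        # Instead of a single dense two-layer MLP, tokens are routed to a set
        # of expert MLPs by the gate inside the FMoE layer.
        self.mlp = FMoETransformerMLP(num_expert=num_expert,
                                      d_model=d_model,
                                      d_hidden=d_hidden)

    def forward(self, x):
        x = x + self.attn(x, x, x)[0]
        x = x + self.mlp(x)
        return x
```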

### Using Fast MoE in Parallel

For data parallelism, no extra coding is needed.
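
In other words, a model containing `FMoE` layers can stay inside whatever
data-parallel wrapper the training script already uses. The sketch below
assumes a plain PyTorch `DistributedDataParallel` setup; the wrapper and the
placeholder `train` call are illustrative, not FastMoE requirements.

```python
import torch
import torch.distributed as dist

dist.init_process_group(backend="nccl")

model = ...  # a model containing FMoE layers, replicated in full on each worker
model = model.cuda()

# Plain data parallelism: every worker holds all experts, so the usual
# gradient all-reduce over the whole model applies unchanged.
model = torch.nn.parallel.DistributedDataParallel(model)

train(model, ...)
```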

Expert parallelism, in which experts are located separately across workers,
requires sophisticated data-parallel strategies that neither PyTorch nor
Megatron-LM provides. The `fmoe.DistributedGroupedDataParallel` module is
introduced to replace PyTorch's DDP module.
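
A hedged sketch of how such a model might be wrapped is shown below. It assumes
the usual `torch.distributed` initialization and that
`fmoe.DistributedGroupedDataParallel` follows the same wrap-then-train pattern
as DDP; constructor options may differ between versions.

```python
import torch.distributed as dist
from fmoe import DistributedGroupedDataParallel

dist.init_process_group(backend="nccl")  # NCCL with P2P support is required

model = ...  # a model containing FMoE layers, e.g. an fmoefy-ed Megatron-LM model
model = model.cuda()

# Use FastMoE's wrapper in place of torch.nn.parallel.DistributedDataParallel:
# shared (non-expert) parameters are synchronized across workers, while each
# worker keeps its own local experts.
model = DistributedGroupedDataParallel(model)

train(model, ...)
```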