Fast MoE currently works with the `v2.0` release of
[Megatron-LM](https://github.com/nvidia/megatron-lm).

A [patch](moefy.patch) is used to easily enable MoE in Megatron-LM for training
BERT.

The patch works in the following way.

### Building the model

In `pretrain_bert.py`, the `fmoe.megatron.fmoefy` function serves as a one-line
entry point that replaces the MLP layers of the transformer language model with
Fast MoE layers.

```python
from fmoe.megatron import fmoefy

# Replace the MLP in every transformer layer with a MoE layer of 4 experts
model = fmoefy(model, num_experts=4)
```

Note that `fmoefy` currently only accepts the top-level raw model of a standard
Megatron-LM setup as input, i.e. the MLP layers must be reachable at
`model.language_model.transformer.layers[i].mlp`.
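
As an illustration of this requirement, the sketch below (the helper name is
hypothetical, not part of Fast MoE) checks that every transformer layer exposes
an `mlp` submodule before `fmoefy` is called:

```python
def check_fmoefy_compatible(model):
    """Hypothetical helper: verify that the MLP of layer i is reachable at
    model.language_model.transformer.layers[i].mlp, as fmoefy expects."""
    layers = model.language_model.transformer.layers
    return all(hasattr(layer, "mlp") for layer in layers)
```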

### Using expert parallelization

In `megatron/training.py`, the `LocalDDP` module is replaced by the one in
`fmoe.megatron` to enable the sophisticated data parallel strategy that
parallelizes the experts across both the data parallel group and the (tensor)
model parallel group.

```python
# from megatron.model import DistributedDataParallel as LocalDDP
from fmoe.megatron import DistributedDataParallel as LocalDDP
```
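
Because the class is meant as a drop-in replacement, the rest of
`megatron/training.py` stays unchanged. The sketch below only illustrates this
idea; `build_megatron_model` is a hypothetical placeholder, and the exact
constructor arguments follow whatever Megatron's training code already passes:

```python
from fmoe.megatron import DistributedDataParallel as LocalDDP

# Hypothetical placeholder for Megatron's usual model construction.
model = build_megatron_model()

# Same wrapping call as before; gradient synchronization is adjusted so that
# experts can be spread across both the data parallel group and the (tensor)
# model parallel group, as described above.
model = LocalDDP(model)
```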

### Train as usual

Start training with Fast MoE by using the scripts provided by Megatron-LM.