Fast MoE
===

## Introduction

An easy-to-use but efficient implementation of the Mixture of Experts (MoE) 
model for PyTorch. 

## Installation

### Prerequisites

PyTorch with CUDA is required. The repository is currently tested with PyTorch
v1.6.0 and CUDA 10, and is designed to be compatible with other versions.

If the distributed version is enabled, NCCL with P2P communication support
(typically version >= 2.7.5) is needed.

### Installing

Fast MoE contains a set of customized PyTorch operators, including both C and
Python components. Use `python setup.py install` to easily install and enjoy
using Fast MoE for training.

## Usage 

### Using Fast MoE as a PyTorch module

Examples can be seen in [examples](examples/). The easiest way is to replace the
feed-forward layer with the `FMoE` layer.
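
The following is a minimal sketch of that replacement inside a simplified
Transformer block. The constructor arguments `num_expert` and `d_model` are
assumptions for illustration; check the scripts in [examples](examples/) for
the actual signature of `FMoE`.

```python
import torch
from torch import nn
from fmoe import FMoE  # the MoE layer provided by Fast MoE


class Block(nn.Module):
    """A simplified Transformer block whose dense feed-forward layer
    has been swapped for an MoE layer."""

    def __init__(self, d_model=512, n_head=8, num_expert=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_head)
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)
        # Previously a dense FFN such as Linear -> ReLU -> Linear; now each
        # token is routed to one of `num_expert` expert networks instead.
        # Argument names here are illustrative, not the definitive API.
        self.ffn = FMoE(num_expert=num_expert, d_model=d_model)

    def forward(self, x):
        attn_out, _ = self.attn(x, x, x)
        x = self.norm1(x + attn_out)
        return self.norm2(x + self.ffn(x))
```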

### Using Fast MoE in Parallel

For data parallelism, nothing extra is needed.

For expert parallelism, in which experts are placed on separate workers, PyTorch
needs to be built with the NCCL backend. Pass the environment variable
`USE_NCCL=1` to `setup.py` to enable distributing experts across workers. Note
that the parameters of the MoE layers should then be excluded from the
data-parallel parameter synchronization list, as sketched below.
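
The snippet below is a hedged sketch of one way to perform that exclusion with
a manual gradient all-reduce. It assumes the expert parameters can be told
apart by a name substring; the `"experts"` string is an assumption about your
model's naming, not an API of Fast MoE.

```python
import torch.distributed as dist


def allreduce_non_expert_grads(model, world_size):
    """Average gradients across workers for every parameter except the
    expert weights, which are intentionally different on each worker."""
    for name, param in model.named_parameters():
        # "experts" is an illustrative substring; adapt it to the naming
        # your model actually uses for the MoE expert parameters.
        if "experts" in name or param.grad is None:
            continue
        dist.all_reduce(param.grad.data, op=dist.ReduceOp.SUM)
        param.grad.data /= world_size
```

One way to use it is to call it after `loss.backward()` and before
`optimizer.step()`, instead of wrapping the whole model in
`DistributedDataParallel`.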

## Feature Roadmap

### Better All-to-all communication efficiency and computation performance

Dispatching tokens from their source workers to the experts is time-consuming
and topology-sensitive, as it is an all-to-all communication. Overlapping or
other communication-reduction techniques can be applied to reduce the overhead
of this step, but this demands considerable research and engineering effort.

### Dynamic expert distribution load balancing

Load imbalance is observed because there is no loss term for load balancing:
some experts are called significantly more often than others. Therefore, a
dynamic scheduler that duplicates or recycles some experts on some workers may
be effective.

### Model parallelism for the experts

To enable larger expert sizes. 

### Use a ZeRO optimizer to reduce memory consumption

### Integrate the top-k gate into the local scatter-gather
