Fast MoE
===

## Introduction

An easy-to-use but efficient implementation of the Mixture of Experts (MoE) 
model for PyTorch. 

## Installation

### Prerequisites

PyTorch with CUDA is required. The repository is currently tested with PyTorch
v1.6.0 and CUDA 10, and is designed to be compatible with other versions.

If the distributed version is enabled, NCCL with P2P communication support
(typically version >= 2.7.5) is needed. Because some necessary messages in FMoE
are passed by MPI, the `torch.distributed` backend should be `mpi`. However, as
other data are synchronized through `torch.distributed` as well, a CUDA-aware
MPI build is required.
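
For instance, a typical training script would initialize the process group
roughly as below. This is a minimal sketch of the launch setup described above,
not part of Fast MoE's API.

```python
# Minimal sketch: initializing torch.distributed with the MPI backend, as
# required when the distributed version of Fast MoE is enabled.
# Launch under an MPI launcher, e.g. `mpirun -np 4 python train.py`.
import torch
import torch.distributed as dist

# With the MPI backend, rank and world size are taken from the MPI runtime.
dist.init_process_group(backend="mpi")

rank = dist.get_rank()
world_size = dist.get_world_size()

# Pin each process to one GPU (assumes one process per GPU on each node).
device = torch.device("cuda", rank % torch.cuda.device_count())
torch.cuda.set_device(device)
print(f"worker {rank}/{world_size} using {device}")
```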

### Installing

Fast MoE contains a set of customized PyTorch operators, including both C and
Python components. Use `python setup.py install` to install Fast MoE and start
using it for training.

## Usage 

### Using Fast MoE as a PyTorch module

Examples can be found in [examples](examples/). The easiest way is to replace
the feed-forward layer with the `FMoE` layer, as sketched below.
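
As a rough illustration, the following sketch swaps the feed-forward sub-layer
of a hand-written Transformer block for an MoE layer. The import path and the
constructor arguments (`num_expert`, `d_model`) are assumptions made for this
illustration; consult the scripts in [examples](examples/) for the exact
interface.

```python
# Sketch: a Transformer block whose dense feed-forward layer is replaced by an
# MoE layer. Constructor arguments are illustrative assumptions.
import torch.nn as nn
from fmoe import FMoE  # assumed import path


class TransformerBlock(nn.Module):
    def __init__(self, d_model=1024, n_head=16, num_expert=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_head)
        # Instead of nn.Sequential(nn.Linear(...), nn.GELU(), nn.Linear(...)),
        # use an MoE layer holding `num_expert` expert networks.
        self.ffn = FMoE(num_expert=num_expert, d_model=d_model)
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)

    def forward(self, x):
        a, _ = self.attn(x, x, x)
        x = self.norm1(x + a)
        x = self.norm2(x + self.ffn(x))
        return x
```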

### Using Fast MoE in Parallel

For data parallelism, nothing else is needed.

For expert parallelism, in which experts are placed on separate workers, PyTorch
needs to be built with both the NCCL and MPI backends. Pass the environment
variable `USE_NCCL=1` to `setup.py` to enable distributing experts across
workers. Note that the parameters of the MoE layers should then be excluded from
the data-parallel parameter synchronization list.
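
One way to do this, sketched here under the assumption that the gradients of
non-MoE parameters are all-reduced manually rather than through
`DistributedDataParallel` (Fast MoE or your training framework may provide its
own helper for this), is to identify the MoE parameters and skip them during
the all-reduce:

```python
# Sketch: excluding MoE-layer parameters from data-parallel gradient
# synchronization. MoE parameters are identified here by a name keyword;
# a real setup might mark them with a dedicated attribute instead.
import torch.distributed as dist


def allreduce_non_moe_grads(model, moe_keyword="ffn"):
    """All-reduce gradients of every parameter except those of MoE layers."""
    world_size = dist.get_world_size()
    for name, param in model.named_parameters():
        if moe_keyword in name or param.grad is None:
            continue  # expert parameters stay local to their worker
        dist.all_reduce(param.grad.data)
        param.grad.data /= world_size
```

Calling such a helper after `loss.backward()` and before `optimizer.step()`
keeps the expert weights local to each worker while the rest of the model
remains data-parallel.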

## Feature Roadmap

### Support NCCL backend

Currently, FMoE depends on MPI to exchange the per-expert feature counts before
using the NCCL P2P communication functions to exchange the features themselves.
As an NCCL communicator can be established through MPI, and MPI has to be
initialized anyway, the PyTorch distributed module has to be initialized with
the MPI backend. However, this limits the ability to use half-precision tensors
and to perform other computation. Therefore, a solution that uses PyTorch's NCCL
backend while still passing the mandatory metadata efficiently would be
appreciated.
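
For reference, the metadata in question is small: before the P2P exchange, each
worker only needs to know how many features every other worker sends to each of
its experts. The sketch below shows how such counts could be exchanged with a
collective that the NCCL backend already supports (`all_gather`); whether a
scheme along these lines is efficient enough in practice is exactly the open
question of this roadmap item.

```python
# Sketch: exchanging per-expert token counts with a plain all_gather instead
# of relying on MPI. With the NCCL backend, `local_count` must be a CUDA
# tensor, e.g. shape (num_expert,) holding the tokens routed to each expert.
import torch
import torch.distributed as dist


def exchange_expert_counts(local_count: torch.Tensor) -> torch.Tensor:
    """Return a (world_size, num_expert) tensor of counts from all workers."""
    world_size = dist.get_world_size()
    gathered = [torch.empty_like(local_count) for _ in range(world_size)]
    dist.all_gather(gathered, local_count)
    return torch.stack(gathered)
```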

### Better All-to-all communication efficiency and computation performance

The process of dispatching features from source workers to the experts is
time-consuming and topology-sensitive, as it is an all-to-all communication.
Overlapping or other communication-reduction techniques can be applied to reduce
the overhead of this step. However, this demands considerable research and
engineering effort.

### Dynamic expert distribution load balancing

Load imbalance is observed because there is no loss term that encourages load
balancing: some experts are called significantly more often than others.
Therefore, a dynamic scheduler that duplicates or recycles experts on certain
workers may be effective.

### Model-parallel experts

To enable larger expert sizes. 

### Use the ZeRO optimizer to reduce memory consumption

### Integrate the top-k gate into the local scatter-gather
