For data parallelism, no extra coding is needed: FastMoE works seamlessly with PyTorch's `DataParallel` or `DistributedDataParallel`.
The only drawback of data parallelism is that the number of experts is constrained by each worker's memory.
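As an illustration, below is a minimal sketch of the data-parallel setup. It assumes a standard `torch.distributed` launch and uses `fmoe.FMoETransformerMLP` as the MoE layer; the hyper-parameters are placeholders.
```
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

from fmoe import FMoETransformerMLP

# Standard distributed setup; every worker holds a full replica of the model.
dist.init_process_group(backend="nccl")
rank = dist.get_rank()
torch.cuda.set_device(rank)

# An MoE feed-forward layer with 4 experts, all resident on this worker
# (world_size=1 keeps every expert local, i.e. pure data parallelism).
model = FMoETransformerMLP(num_expert=4, d_model=512, d_hidden=2048,
                           world_size=1).cuda()

# Plain PyTorch DDP replicates the whole layer, experts included.
model = DDP(model, device_ids=[rank])

y = model(torch.randn(8, 16, 512).cuda())
```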
#### Expert Parallel (also called Model Parallel in some previous versions)
In FastMoE's expert parallel mode, the gate network is still replicated on each worker but
experts are placed separately across workers.
Thus, by introducing additional communication cost, FastMoE enjoys a large expert pool whose size is proportional to the number of workers.
The following figure shows the forward pass of a 6-expert MoE with 2-way expert parallel. Note that experts 1-3 are placed on worker 1 while experts 4-6 are placed on worker 2.
FastMoE's expert parallel mode requires sophisticated parallel strategies that neither
PyTorch nor Megatron-LM provided when FastMoE was created. The
`fmoe.DistributedGroupedDataParallel` module is introduced to replace PyTorch's
DDP module.
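A minimal sketch of the expert-parallel setup under the same assumptions as above (illustrative hyper-parameters, standard `torch.distributed` launch); consult FastMoE's documentation for the full set of constructor arguments:
```
import torch
import torch.distributed as dist

from fmoe import FMoETransformerMLP, DistributedGroupedDataParallel

dist.init_process_group(backend="nccl")
rank = dist.get_rank()
world_size = dist.get_world_size()
torch.cuda.set_device(rank)

# Each worker holds 3 local experts, so the model as a whole has
# 3 * world_size experts; tokens are exchanged between workers by FastMoE.
model = FMoETransformerMLP(num_expert=3, d_model=512, d_hidden=2048,
                           world_size=world_size).cuda()

# DistributedGroupedDataParallel replaces PyTorch's DDP: replicated parameters
# (such as the gate) are synchronized across workers, while each worker's
# locally-held expert parameters are left untouched.
model = DistributedGroupedDataParallel(model)

y = model(torch.randn(8, 16, 512).cuda())
```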
#### Faster Performance Features
We have adopted techniques from a PPoPP'22 paper, _FasterMoE: modeling and
optimizing training of large-scale dynamic pre-trained models_, to make
FastMoE's expert parallel mode much more efficient.
These optimizations are named **Faster Performance Features**, and can be
enabled via several environment variables. Their usage and constraints are
detailed in [a separate document](doc/fastermoe).
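As a rough illustration, the snippet below shows how such a switch could be flipped from Python before the MoE layers run. The variable name `FMOE_FASTER_SCHEDULE_ENABLE` is an assumption here; the authoritative names and their constraints are listed in the document linked above.
```
import os

# Assumed variable name for the smart-scheduling optimization; check
# doc/fastermoe for the actual list of switches and their constraints.
os.environ["FMOE_FASTER_SCHEDULE_ENABLE"] = "1"
```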
## Citation
For the core FastMoE system.
```
@article{he2021fastmoe,
title={FastMoE: A Fast Mixture-of-Expert Training System},
author={Jiaao He and Jiezhong Qiu and Aohan Zeng and Zhilin Yang and Jidong Zhai and Jie Tang},
journal={arXiv preprint arXiv:2103.13262},
year={2021}
}
```
For the [faster performance features](doc/fastermoe).
```
@inproceedings{he2022fastermoe,
author = {He, Jiaao and Zhai, Jidong and Antunes, Tiago and Wang, Haojie and Luo, Fuwen and Shi, Shangfeng and Li, Qin},
title = {FasterMoE: Modeling and Optimizing Training of Large-Scale Dynamic Pre-Trained Models},
year = {2022},
isbn = {9781450392044},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3503221.3508418},
doi = {10.1145/3503221.3508418},
booktitle = {Proceedings of the 27th ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming},
pages = {120–134},
numpages = {15},
keywords = {parallelism, distributed deep learning, performance modeling},
location = {Seoul, Republic of Korea},
series = {PPoPP '22}
}
```
## Troubleshooting / Discussion
If you have any problem using FastMoE, or you are interested in getting involved in developing FastMoE, feel free to join [our slack channel](https://join.slack.com/t/fastmoe/shared_invite/zt-mz0ai6ol-ggov75D62YsgHfzShw8KYw).