Commit 774071bf, authored Jan 25, 2021 by Rick Ho
update readme
parent 7e5b10b6

Showing 1 changed file (README.md) with 29 additions and 1 deletion.

Fast MoE
===
## Introduction
An easy-to-use but efficient implementation of the Mixture of Experts (MoE)
model for PyTorch.
## Installation
PyTorch with CUDA is supported. The repository is currently tested with PyTorch
v1.6.0 and CUDA 10, and is designed to be compatible with other versions.

Fast MoE contains a set of customized PyTorch operators, including both C and
Python components. Run `python setup.py install` to install Fast MoE for
training.
## Usage
### Using Fast MoE as a PyTorch module
Examples can be seen in [examples](examples/). The easiest way is to replace
the feed-forward layer with the `FMoE` layer.
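
As a sketch of that replacement, the block below swaps the position-wise
feed-forward sub-layer of a Transformer block for an `FMoE` layer. The import
path and the constructor arguments (`num_expert`, `d_model`) are assumptions
made for illustration only; the scripts under `examples/` are the
authoritative reference for the actual interface.

```python
import torch.nn as nn

# Assumed import path; check the package layout of this repository.
from fmoe import FMoE


class TransformerBlock(nn.Module):
    """A simplified Transformer block whose feed-forward layer is an MoE layer."""

    def __init__(self, d_model=512, n_heads=8, num_expert=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads)
        # Instead of the usual Linear -> activation -> Linear feed-forward
        # block, use the FMoE layer. The argument names here are assumptions.
        self.ffn = FMoE(num_expert=num_expert, d_model=d_model)
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)

    def forward(self, x):
        attn_out, _ = self.attn(x, x, x)
        x = self.norm1(x + attn_out)
        x = self.norm2(x + self.ffn(x))
        return x
```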
### Using Fast MoE in Parallel
For data parallelism, nothing extra is needed.

For expert parallelism, in which experts are placed on separate workers,
PyTorch must be built with the NCCL and MPI backends. Pass the environment
variable `USE_NCCL=1` to `setup.py` to enable distributing experts across
workers. Note that the parameters of the MoE layers should then be excluded
from the data-parallel parameter synchronization list.
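
To illustrate that last point, the sketch below averages gradients across
data-parallel workers by hand while skipping expert parameters, which live
only on their own worker and must not be averaged. This is not Fast MoE's own
synchronization code, and the assumption that expert parameters can be
recognized by the substring `experts` in their names is made up for the
example.

```python
import torch.distributed as dist


def allreduce_non_expert_grads(model, expert_keyword="experts"):
    """Average gradients across data-parallel workers, skipping expert parameters.

    Illustrative sketch only: parameters whose names contain `expert_keyword`
    (an assumed naming convention) are treated as expert-parallel parameters
    and are left out of the all-reduce.
    """
    world_size = dist.get_world_size()
    for name, param in model.named_parameters():
        if param.grad is None or expert_keyword in name:
            continue  # expert weights are local to one worker; do not sync them
        dist.all_reduce(param.grad, op=dist.ReduceOp.SUM)
        param.grad /= world_size
```

In this sketch, the function would be called on each worker after
`loss.backward()` and before `optimizer.step()`.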