OpenDAS / FastMoE
Commit ad07f07a, authored Jan 29, 2021 by Rick Ho

remove mpi in readme

Parent: 72e9bc9e
Showing 1 changed file with 5 additions and 18 deletions.

README.md (+5, -18)
@@ -14,10 +14,7 @@ PyTorch with CUDA is required. The repository is currently tested with PyTorch
 v1.6.0 and CUDA 10, with designed compatibility to other versions.
 If distributed version is enabled, NCCL with P2P communication support,
-typically versions >= 2.7.5 is needed. Note that the MPI backend is used as
-there are some necessary messages to be passed by MPI in FMoE, the backend
-should be `mpi`. However, as there are other data to be synchronized by
-`torch.distributed`, cuda-aware mpi is required.
+typically versions >= 2.7.5 is needed.
 ### Installing
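As a quick aside to the requirement in the hunk above: the sketch below checks, with standard PyTorch calls only, that the local build ships the NCCL backend and reports which NCCL version it was compiled against. The >= 2.7.5 threshold is the one stated in the README; the return format of `torch.cuda.nccl.version()` varies between PyTorch releases, so the check only prints it.

```python
# Sanity check (not part of the commit): confirm the PyTorch build meets the
# distributed requirements stated in the README.
import torch
import torch.distributed as dist

assert torch.cuda.is_available(), "FastMoE requires PyTorch with CUDA"
assert dist.is_nccl_available(), "this PyTorch build has no NCCL backend"

# Depending on the PyTorch release, torch.cuda.nccl.version() returns either
# a (major, minor, patch) tuple or a packed integer (e.g. 2708 for 2.7.8);
# it should correspond to NCCL >= 2.7.5 for P2P communication support.
print("NCCL version reported by PyTorch:", torch.cuda.nccl.version())
```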
@@ -37,23 +34,13 @@ feed forward layer by the `FMoE` layer.
 For data parallel, nothing else is needed.
 For expert parallel, in which experts are located separately across workers,
-NCCL and MPI backend are required to be built with PyTorch. Use environment
+NCCL backend is required to be built with PyTorch. Use environment
 variable `USE_NCCL=1` to `setup.py` to enable distributing experts across
 workers. Note that the arguments of the MoE layers should then be excluded from
 the data parallel parameter synchronization list.
 ## Feature Roadmap
-### Support NCCL backend
-Currently, fmoe depends on MPI to exchange the count of experts before using
-the NCCL p2p communication function to exchange features. Since the NCCL
-communicator is established through MPI, and MPI has to be initialized anyway,
-the PyTorch distributed module has to be initialized with the MPI backend.
-However, this limits the capability to use half tensors and conduct other
-computation. Therefore, a solution will be appreciated if we can use PyTorch's
-NCCL backend while passing our mandatory information efficiently.
 ### Better All-to-all communication efficiency and computation performance
 The dispatching process from source worker to the expert is time-consuming and
...
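The second hunk keeps the note that the MoE layers' arguments must be excluded from the data-parallel parameter synchronization list, but the diff does not show how FastMoE expects this to be done. The sketch below is only one way to achieve it with plain `torch.distributed`: gradients are averaged manually, skipping expert-local weights. The `"experts"` name match is a hypothetical convention for illustration, not FastMoE's actual API.

```python
# A minimal sketch: average gradients across data-parallel workers while
# keeping expert-local gradients out of the synchronization, as the README
# requires for expert parallelism. The "experts" naming is an assumption.
import torch
import torch.distributed as dist

def allreduce_non_expert_grads(model: torch.nn.Module) -> None:
    """Average gradients across workers, skipping expert-local parameters."""
    world_size = dist.get_world_size()
    for name, param in model.named_parameters():
        if "experts" in name:          # assumed marker for expert-local weights
            continue                   # expert gradients stay worker-local
        if param.grad is not None:
            dist.all_reduce(param.grad, op=dist.ReduceOp.SUM)
            param.grad /= world_size

# Typical use in the training loop, after loss.backward() and before
# optimizer.step():
#     allreduce_non_expert_grads(model)
```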
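The removed roadmap item explains why MPI used to be required: the per-worker token counts had to be exchanged before the NCCL P2P feature exchange. Since this commit drops MPI from the requirements, those counts presumably now travel over NCCL as well. The sketch below illustrates that idea with a single standard collective; the tensor names are illustrative only and do not reflect FastMoE's internal implementation.

```python
# Illustration of the idea behind the removed roadmap item: share per-expert
# token counts over the NCCL backend so no MPI initialization is needed
# before the P2P feature exchange. Names are illustrative assumptions.
import torch
import torch.distributed as dist

def exchange_expert_counts(local_counts: torch.Tensor) -> torch.Tensor:
    """Gather every worker's per-expert token counts over the NCCL backend.

    local_counts: CUDA int64 tensor of shape (num_experts,), the number of
    tokens this worker routes to each global expert.
    Returns a (world_size, num_experts) tensor on every worker.
    """
    world_size = dist.get_world_size()
    gathered = [torch.empty_like(local_counts) for _ in range(world_size)]
    # all_gather works on CUDA tensors with the NCCL backend, so the counts
    # can be shared without an MPI-initialized process group.
    dist.all_gather(gathered, local_counts)
    return torch.stack(gathered)
```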