OpenDAS / FastMoE · Commits

Commit 01464726, authored Feb 21, 2021 by Jiezhong Qiu

Merge remote-tracking branch 'origin/master' into bias

Parents: 3d8e8a43, ed9277f9
Changes: 2 changed files, with 11 additions and 2 deletions

- README.md: +10 −2
- setup.py: +1 −0
README.md @ 01464726

...
@@ -22,8 +22,16 @@
 Fast MoE contains a set of PyTorch customized operators, including both C and
 Python components. Use `python setup.py install` to easily install and enjoy
 using Fast MoE for training.
-The distributed expert feature is enabled by default. If you want to disable
-it, pass environment variable `USE_NCCL=0` to the setup script.
+The distributed expert feature is disabled by default. If you want to enable
+it, pass environment variable `USE_NCCL=1` to the setup script.
+Note that an extra NCCL developer package is needed, which has to be consistent
+with your PyTorch's NCCL version; the latter can be inspected by running
+`torch.cuda.nccl.version()`. The [official PyTorch docker image]() is
+recommended, as the environment is well set up there. Otherwise, you can use
+the [download link of all NCCL versions](https://developer.nvidia.com/nccl/nccl-legacy-downloads)
+to download the NCCL package that is suitable for you.
## Usage
...
...
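As a usage sketch of the flag described in the hunk above: `USE_NCCL` is read from the environment by the setup script, so it is passed on the command line of the ordinary `python setup.py install` invocation. The demonstration below only shows the flag reaching a Python process as an environment variable; the install command itself is commented out since it assumes a FastMoE source tree.

```shell
# Opt-in build with the NCCL-backed distributed expert feature
# (run from the FastMoE source tree):
#   USE_NCCL=1 python setup.py install
#
# The flag reaches setup.py as a plain environment variable:
USE_NCCL=1 python3 -c "import os; print(os.environ.get('USE_NCCL', '0'))"
```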
setup.py @ 01464726

...
@@ -10,6 +10,7 @@ cxx_flags = [
 ext_libs = []
 if os.environ.get('USE_NCCL', '0') == '1':
     cxx_flags.append('-DMOE_USE_NCCL')
+    ext_libs.append('nccl')
 if __name__ == '__main__':
...
...
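The gate this hunk extends can be reproduced standalone. A minimal sketch, assuming an empty base flag list (the real `cxx_flags` initializer is elided in the diff, and `moe_build_config` is a hypothetical helper, not FastMoE API):

```python
import os

def moe_build_config(environ):
    """Mimic setup.py's USE_NCCL gate (hypothetical helper)."""
    cxx_flags = []  # the real initializer is elided in the hunk above
    ext_libs = []
    if environ.get('USE_NCCL', '0') == '1':
        cxx_flags.append('-DMOE_USE_NCCL')
        ext_libs.append('nccl')  # the linker dependency this commit adds
    return cxx_flags, ext_libs

# Default: the distributed expert feature stays off.
print(moe_build_config({}))                 # ([], [])
# Opt-in: compile-time define plus the NCCL link dependency.
print(moe_build_config({'USE_NCCL': '1'}))  # (['-DMOE_USE_NCCL'], ['nccl'])
```

The commit's one-line change matters because defining `-DMOE_USE_NCCL` without also linking `nccl` would compile the NCCL code paths but fail at link time.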