Commit ed9277f9 authored by Rick Ho

update nccl installation method and write readme

parent 406955e7
@@ -22,8 +22,16 @@ Fast MoE contains a set of PyTorch customized operators, including both C and
Python components. Use `python setup.py install` to easily install and enjoy
using Fast MoE for training.
The distributed expert feature is disabled by default. If you want to enable
it, pass environment variable `USE_NCCL=1` to the setup script.
Note that an extra NCCL developer package is needed, and it has to be consistent
with the NCCL version of your PyTorch, which can be inspected by running
`torch.cuda.nccl.version()`. The [official PyTorch docker image]() is
recommended, as the environment there is well set up. Otherwise, you can visit
the [download page of all NCCL
versions](https://developer.nvidia.com/nccl/nccl-legacy-downloads) to download
the NCCL package that is suitable for your environment.
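
For example, assuming a standard Linux shell, the distributed build could be
enabled with something like `USE_NCCL=1 python setup.py install`. Below is a
quick sketch for inspecting the NCCL version that your PyTorch build ships
with, so the developer package you install can match it:

```python
import torch
import torch.cuda.nccl

# Print the NCCL version bundled with this PyTorch build.
# Newer PyTorch releases return a tuple such as (2, 10, 3);
# older releases may return a single packed integer.
if torch.cuda.is_available():
    print('PyTorch NCCL version:', torch.cuda.nccl.version())
else:
    print('CUDA is not available; NCCL is only used for CUDA builds.')
```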
## Usage
@@ -10,6 +10,7 @@ cxx_flags = [
ext_libs = []
if os.environ.get('USE_NCCL', '0') == '1':
    cxx_flags.append('-DMOE_USE_NCCL')
    ext_libs.append('nccl')
if __name__ == '__main__':
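
For context, here is a minimal sketch of how these two variables are typically
consumed further down in a `setup.py` of this kind, assuming the extension is
built with `torch.utils.cpp_extension.CUDAExtension`; the module name
`fmoe_cuda` and the source list are illustrative placeholders, not the
repository's actual values:

```python
import os
from setuptools import setup
from torch.utils.cpp_extension import BuildExtension, CUDAExtension

cxx_flags = []
ext_libs = []

if os.environ.get('USE_NCCL', '0') == '1':
    # Compile the NCCL code paths and link against the NCCL developer package.
    cxx_flags.append('-DMOE_USE_NCCL')
    ext_libs.append('nccl')

if __name__ == '__main__':
    setup(
        name='fastmoe',
        ext_modules=[
            CUDAExtension(
                name='fmoe_cuda',  # illustrative module name
                sources=['cuda/moe.cpp', 'cuda/moe_kernel.cu'],  # illustrative sources
                extra_compile_args={'cxx': cxx_flags, 'nvcc': cxx_flags},
                libraries=ext_libs,  # adds -lnccl when USE_NCCL=1
            )
        ],
        cmdclass={'build_ext': BuildExtension},
    )
```

Passing `-DMOE_USE_NCCL` lets the C++/CUDA sources guard their NCCL-dependent
code behind a preprocessor check, so a build without NCCL still compiles.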