Unverified commit a4119185 authored by Yimin Jiang, committed by GitHub

Fix typo in readme

parent 11369d67
@@ -28,7 +28,7 @@ using FastMoE for training.
 The distributed expert feature is disabled by default. If you want to enable
 it, pass environment variable `USE_NCCL=1` to the setup script.
-Note that an extra NCCL developer package is needed, which has to be consistant
+Note that an extra NCCL developer package is needed, which has to be consistent
 with your PyTorch's NCCL version, which can be inspected by running
 `torch.cuda.nccl.version()`. The
 [official PyTorch docker image](https://hub.docker.com/r/pytorch/pytorch) is
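For reference, the check described in this hunk can be scripted as in the minimal sketch below; the build invocation shown in the comment (`USE_NCCL=1 python setup.py install`) is assumed from the environment-variable instruction in the README text, not taken verbatim from the diff.

```python
# Minimal sketch: confirm which NCCL version PyTorch was built against
# before compiling FastMoE with the distributed expert feature enabled.
import torch

# torch.cuda.nccl.version() reports the NCCL version bundled with PyTorch;
# the locally installed NCCL developer package must match it.
print("PyTorch NCCL version:", torch.cuda.nccl.version())

# Assumed build command (per the USE_NCCL=1 environment variable above):
#   USE_NCCL=1 python setup.py install
```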