- 28 May, 2020 2 commits
-
-
Chunyang Wen authored
* fix: typo in code docs
* more pythonic code
-
Chunyang Wen authored
Co-authored-by: Shaden Smith <Shaden.Smith@microsoft.com>
Co-authored-by: Jeff Rasley <jerasley@microsoft.com>
-
- 27 May, 2020 5 commits
-
-
Jeff Rasley authored
-
Jeff Rasley authored
* add support for predivide as a flag
* add predivide json config, remove allgather_disable (as it is no longer used)
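A minimal pure-Python sketch of what a pre-divide flag changes (function name and simulated allreduce are illustrative, not DeepSpeed's API): dividing each worker's gradients by the world size *before* the reduction keeps the summed values small, which matters for fp16 overflow, while post-divide sums first and divides once at the end.

```python
def average_gradients(worker_grads, predivide=True):
    """Simulate allreduce-averaging of per-worker gradient lists."""
    world_size = len(worker_grads)
    if predivide:
        # Pre-divide: scale each worker's gradients down first, then sum.
        scaled = [[g / world_size for g in grads] for grads in worker_grads]
        return [sum(gs) for gs in zip(*scaled)]
    # Post-divide: sum across workers first, divide once at the end.
    return [s / world_size for s in (sum(gs) for gs in zip(*worker_grads))]


# Both orderings give the same average in exact arithmetic.
print(average_gradients([[2.0], [4.0]], predivide=True))   # [3.0]
print(average_gradients([[2.0], [4.0]], predivide=False))  # [3.0]
```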
-
Samyam Rajbhandari authored
Contiguous Gradients should be set to false by default. It's not useful unless the model is very large.
-
Samyam Rajbhandari authored
* Fix for CPU memory bloating issue caused by PyTorch backward graph creation in allgather. Fixed by calling detach on tensors before calling all_gather.
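An illustrative sketch of the mechanism behind this fix (the variable names are made up; the real change is inside DeepSpeed's allgather path): `detach()` returns a tensor that shares storage but carries no autograd history, so passing the detached tensor to a collective stops the backward graph from growing across iterations.

```python
import torch

# A tensor that is part of an autograd graph.
param = torch.ones(4, requires_grad=True)
buf = param * 2          # buf holds a reference back into the graph
shard = buf.detach()     # same data, no graph reference

assert buf.requires_grad is True
assert shard.requires_grad is False
# In the fix described above, it is the detached tensor that gets handed
# to torch.distributed.all_gather, so no backward graph accumulates.
```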
-
Jeff Rasley authored
* updates to support fp32 grad clipping and disable max_grad_norm
-
- 26 May, 2020 1 commit
-
-
Shaden Smith authored
-
- 25 May, 2020 1 commit
-
-
Chunyang Wen authored
-
- 21 May, 2020 2 commits
-
-
Shaden Smith authored
-
Shaden Smith authored
-
- 20 May, 2020 1 commit
-
-
Jeff Rasley authored
-
- 19 May, 2020 6 commits
-
-
Shaden Smith authored
-
Shaden Smith authored
-
Shaden Smith authored
* BERT title
-
Shaden Smith authored
-
Shaden Smith authored
-
Jeff Rasley authored
Updates for ZeRO stage 2 + ZeRO stage 1 w. RS
Co-authored-by: Tunji Ruwase <olruwase@microsoft.com>
Co-authored-by: Samyam Rajbhandari <samyamr@microsoft.com>
Co-authored-by: Shaden Smith <ShadenTSmith@gmail.com>
Co-authored-by: Elton Zheng <eltonz@microsoft.com>
Co-authored-by: Shaden Smith <Shaden.Smith@microsoft.com>
Co-authored-by: yuxionghe <yuxhe@microsoft.com>
Co-authored-by: Arash Ashari <arashari@microsoft.com>
-
- 18 May, 2020 1 commit
-
-
Arash Ashari authored
* adding BingSquad e2e test
* updating the draft test; bring final step under try section
* finalizing test for base DeepSpeed and DeepSpeed with ZeRO
* applying the comment (thanks Jeff); fixed formatting
-
- 15 May, 2020 1 commit
-
-
Shaden Smith authored
-
- 13 May, 2020 1 commit
-
-
Jeff Rasley authored
-
- 12 May, 2020 1 commit
-
-
Shaden Smith authored
-
- 11 May, 2020 1 commit
-
-
Olatunji Ruwase authored
* Support dynamic loss scale args in fp16 optimizers
* Update names
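A toy pure-Python sketch of the dynamic loss scaling these args configure (the parameter names here are illustrative, not necessarily DeepSpeed's): the scale is cut on overflow and raised again after a window of overflow-free steps.

```python
class DynamicLossScaler:
    """Minimal dynamic loss scaler sketch (not DeepSpeed's implementation)."""

    def __init__(self, init_scale=2.0 ** 16, scale_factor=2.0, scale_window=1000):
        self.scale = init_scale
        self.scale_factor = scale_factor
        self.scale_window = scale_window
        self._good_steps = 0

    def update(self, overflow):
        if overflow:
            # Gradients overflowed: back the scale off and restart the window.
            self.scale = max(1.0, self.scale / self.scale_factor)
            self._good_steps = 0
        else:
            # After scale_window consecutive clean steps, grow the scale.
            self._good_steps += 1
            if self._good_steps % self.scale_window == 0:
                self.scale *= self.scale_factor


scaler = DynamicLossScaler(init_scale=4.0, scale_factor=2.0, scale_window=2)
scaler.update(False)
scaler.update(False)
print(scaler.scale)  # 8.0 after two clean steps
scaler.update(True)
print(scaler.scale)  # 4.0 after an overflow
```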
-
- 06 May, 2020 2 commits
-
-
Shaden Smith authored
-
Shaden Smith authored
-
- 05 May, 2020 1 commit
-
-
Jeff Rasley authored
* add basic post-install test
-
- 04 May, 2020 1 commit
-
-
Jeff Rasley authored
-
- 30 Apr, 2020 2 commits
-
-
Jeff Rasley authored
* update apex version to feb 5th commit
* use gradient clipping instead of max grad norm in tests
* add warning when user provides max_grad_norm
* update examples commit
-
Jeff Rasley authored
-
- 29 Apr, 2020 1 commit
-
-
Samyam Rajbhandari authored
1) CSR parameter names should end with .weight.
2) When using the basic optimizer directly, DeepSpeed should handle zero_grad. Letting the basic optimizer do the zero_grad resulted in residual gradients in the embedding layer for unknown reasons.
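A toy sketch of the ownership rule in point 2 (class and attribute names are hypothetical, not DeepSpeed's): the engine performs zero_grad itself after stepping, rather than delegating gradient clearing to the wrapped optimizer.

```python
class ToyOptimizer:
    """Stand-in for a 'basic optimizer' with a gradient store."""

    def __init__(self):
        self.grads = {"embedding.weight": 1.0}
        self.stepped = False

    def step(self):
        self.stepped = True

    def zero_grad(self):
        self.grads = {name: 0.0 for name in self.grads}


class TinyEngine:
    """Wrapper that owns gradient clearing, per the fix described above."""

    def __init__(self, optimizer):
        self.optimizer = optimizer

    def step(self):
        self.optimizer.step()
        # The engine, not the caller or the basic optimizer's own training
        # loop, clears gradients here so none are left behind.
        self.optimizer.zero_grad()


engine = TinyEngine(ToyOptimizer())
engine.step()
print(engine.optimizer.grads)  # {'embedding.weight': 0.0}
```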
-
- 27 Apr, 2020 1 commit
-
-
Shaden Smith authored
-
- 25 Apr, 2020 1 commit
-
-
Jeff Rasley authored
Remove the explicit torch version requirement so that we can more easily support other versions.
-
- 24 Apr, 2020 1 commit
-
-
Olatunji Ruwase authored
-
- 22 Apr, 2020 2 commits
-
-
Shaden Smith authored
-
Shaden Smith authored
-
- 21 Apr, 2020 1 commit
-
-
Olatunji Ruwase authored
Co-authored-by: Shaden Smith <Shaden.Smith@microsoft.com>
-
- 20 Apr, 2020 1 commit
-
-
marload authored
-
- 16 Apr, 2020 1 commit
-
-
Jeff Rasley authored
-
- 12 Apr, 2020 1 commit
-
-
Samyam Rajbhandari authored
-
- 10 Apr, 2020 1 commit
-
-
Shaden Smith authored
-