- 19 May, 2020 6 commits
  - Shaden Smith authored
  - Shaden Smith authored
  - Shaden Smith authored
    * BERT title
  - Shaden Smith authored
  - Shaden Smith authored
  - Jeff Rasley authored
    Updates for ZeRO stage 2 + ZeRO stage 1 w. RS
    Co-authored-by: Tunji Ruwase <olruwase@microsoft.com>
    Co-authored-by: Samyam Rajbhandari <samyamr@microsoft.com>
    Co-authored-by: Shaden Smith <ShadenTSmith@gmail.com>
    Co-authored-by: Elton Zheng <eltonz@microsoft.com>
    Co-authored-by: Shaden Smith <Shaden.Smith@microsoft.com>
    Co-authored-by: yuxionghe <yuxhe@microsoft.com>
    Co-authored-by: Arash Ashari <arashari@microsoft.com>
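For context on the ZeRO stage 2 work in the last commit above, a minimal config sketch follows. The zero_optimization key names are taken from current DeepSpeed documentation and are assumed, not confirmed, to match the version in this commit; the remaining keys are illustrative.

```python
import json

# Minimal sketch of a DeepSpeed config enabling ZeRO stage 2 with
# reduce-scatter. Key names follow current DeepSpeed docs; whether every
# key existed at the time of this commit is an assumption.
ds_config = {
    "train_batch_size": 32,
    "fp16": {"enabled": True},
    "zero_optimization": {
        "stage": 2,                 # partition optimizer states and gradients
        "reduce_scatter": True,     # average gradients with reduce-scatter
        "contiguous_gradients": True,
        "reduce_bucket_size": 500000000,
    },
}

with open("ds_config.json", "w") as f:
    json.dump(ds_config, f, indent=2)
```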
- 18 May, 2020 1 commit
  - Arash Ashari authored
    * adding BingSquad e2e test
    * updating the draft test; bring the final step under the try section
    * finalizing test for base deepspeed and deepspeed with ZeRO
    * applying the comment (thanks Jeff); fixed formatting
- 15 May, 2020 1 commit
  - Shaden Smith authored
- 13 May, 2020 1 commit
  - Jeff Rasley authored
- 12 May, 2020 1 commit
  - Shaden Smith authored
- 11 May, 2020 1 commit
  - Olatunji Ruwase authored
    * Support dynamic loss scale args in fp16 optimizers
    * Update names
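As a rough illustration of the dynamic loss scale arguments the commit above refers to, here is a sketch of the fp16 section of a DeepSpeed config; the key names follow current DeepSpeed documentation and are assumed to correspond to the args added here.

```python
# Sketch of dynamic loss scaling settings in the fp16 config section.
# loss_scale = 0 selects dynamic scaling; the other keys tune its behavior.
# Key names come from current DeepSpeed docs (an assumption for this commit).
fp16_section = {
    "fp16": {
        "enabled": True,
        "loss_scale": 0,            # 0 => dynamic loss scaling
        "loss_scale_window": 1000,  # steps of stability before raising the scale
        "hysteresis": 2,            # tolerated consecutive overflows before lowering it
        "min_loss_scale": 1,
    }
}
```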
- 06 May, 2020 2 commits
  - Shaden Smith authored
  - Shaden Smith authored
- 05 May, 2020 1 commit
  - Jeff Rasley authored
    * add basic post-install test
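A basic post-install check in the same spirit might be no more than an import probe; this sketch assumes the package exposes __version__, as current releases do, and is not the test added in this commit.

```python
# Smoke test: can the freshly installed package be imported at all?
import deepspeed

print("deepspeed imported, version:", deepspeed.__version__)
```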
- 04 May, 2020 1 commit
  - Jeff Rasley authored
- 30 Apr, 2020 2 commits
  - Jeff Rasley authored
    * update apex version to Feb 5th commit
    * use gradient clipping instead of max grad norm in tests
    * add warning when user provides max_grad_norm (see the config sketch after this list)
    * update examples commit
  - Jeff Rasley authored
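As a rough illustration of the clipping change above: gradient clipping is requested through the DeepSpeed config's gradient_clipping field rather than a max_grad_norm argument. The remaining keys in this sketch are illustrative assumptions.

```python
# Sketch: request gradient clipping via the config instead of max_grad_norm.
# gradient_clipping is a documented DeepSpeed config field; the other keys
# here are just illustrative context.
ds_config = {
    "train_batch_size": 16,
    "gradient_clipping": 1.0,   # clip the gradient norm to 1.0
    "fp16": {"enabled": True},
}
```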
- 29 Apr, 2020 1 commit
  - Samyam Rajbhandari authored
    1) CSR parameter names should end with .weight.
    2) When using the basic optimizer directly, DeepSpeed should handle zero_grad. Letting the basic optimizer do the zero_grad left residual gradients in the embedding layer for unknown reasons.
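To illustrate point 1) above, here is a small, hypothetical PyTorch sketch of a model whose sparse embedding keeps the default parameter name, so it is registered as "emb.weight", which is the kind of name the CSR handling keys off.

```python
import torch

# Hypothetical model: the sparse embedding keeps its default parameter name,
# so it shows up as "emb.weight" in named_parameters().
class TinyClassifier(torch.nn.Module):
    def __init__(self, vocab=1000, dim=64, classes=2):
        super().__init__()
        self.emb = torch.nn.Embedding(vocab, dim, sparse=True)
        self.out = torch.nn.Linear(dim, classes)

    def forward(self, ids):
        return self.out(self.emb(ids).mean(dim=1))

# Prints: ['emb.weight', 'out.weight', 'out.bias']
print([name for name, _ in TinyClassifier().named_parameters()])
```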
- 27 Apr, 2020 1 commit
  - Shaden Smith authored
- 25 Apr, 2020 1 commit
  - Jeff Rasley authored
    Remove explicit torch version requirement so that we can more easily support other versions.
- 24 Apr, 2020 1 commit
  - Olatunji Ruwase authored
- 22 Apr, 2020 2 commits
  - Shaden Smith authored
  - Shaden Smith authored
- 21 Apr, 2020 1 commit
  - Olatunji Ruwase authored
    Co-authored-by: Shaden Smith <Shaden.Smith@microsoft.com>
- 20 Apr, 2020 1 commit
  - marload authored
- 16 Apr, 2020 1 commit
  - Jeff Rasley authored
- 12 Apr, 2020 1 commit
  - Samyam Rajbhandari authored
- 10 Apr, 2020 1 commit
  - Shaden Smith authored
- 09 Apr, 2020 1 commit
  - Jeff Rasley authored
- 07 Apr, 2020 1 commit
  - marload authored
- 06 Apr, 2020 1 commit
  - Shaden Smith authored
- 03 Apr, 2020 1 commit
  - kouml authored
- 28 Mar, 2020 1 commit
  - Shaden Smith authored
- 27 Mar, 2020 2 commits
  - Olatunji Ruwase authored
    * Push to remote
    * Correctly handle multi-output models by doing loss scaling in backward(); add unit tests for multi-output models
    * Fix formatting issues
    * Formatting issues fix
    * Fix formatting
    * Update DeepSpeedExamples submodule; enable Megatron model tests
  - Calogero Zarbo authored
    * added zero_allow_untested_optimizer flag helpers
    * add zero_allow_untested_optimizer config constants
    * zero_allow_untested_optimizer logic with assertion
    * Added unit test and CustomOptimizer helper class
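The zero_allow_untested_optimizer flag from the last commit above is a config-level opt-in for running ZeRO with an optimizer outside the tested set. A minimal sketch of how it would appear in a config follows; the flag name comes from this commit, while the remaining keys are illustrative assumptions.

```python
# Sketch: opting in to running ZeRO with an optimizer outside the tested set.
# The flag name comes from this commit; the remaining keys are illustrative.
ds_config = {
    "train_batch_size": 16,
    "zero_optimization": {"stage": 1},
    "zero_allow_untested_optimizer": True,  # skip the "tested optimizer" assertion
}
```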
- 26 Mar, 2020 1 commit
  - Shaden Smith authored
- 25 Mar, 2020 1 commit
  - Shaden Smith authored
- 23 Mar, 2020 1 commit
  - Olatunji Ruwase authored
- 22 Mar, 2020 2 commits
  - Calogero Zarbo authored
  - kouml authored
    * remove session_params in deepspeed_constants.py
    * add constants info to README.md