- 25 Dec, 2020 (1 commit)
  - mohammad authored
- 23 Dec, 2020 (1 commit)
  - Deepak Narayanan authored: Checkpoint should be saved only after the evaluation pass is run, to make sure validation losses are identical after loading the checkpoint.
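A minimal sketch of the intended ordering, assuming a Megatron-style training loop; the helper names (`train_step`, `evaluate`, `save_checkpoint`) and interval arguments are placeholders, not the exact functions touched by this commit:

```python
def train(model, optimizer, lr_scheduler, train_iter, valid_iter, args):
    """Sketch: run evaluation before checkpointing so a resumed run
    reproduces the validation loss logged at save time."""
    iteration = 0
    while iteration < args.train_iters:
        train_step(model, optimizer, lr_scheduler, train_iter)  # placeholder
        iteration += 1

        if iteration % args.eval_interval == 0:
            # Evaluation pass runs first, on exactly the weights that may be saved below.
            evaluate(model, valid_iter)  # placeholder

        if iteration % args.save_interval == 0:
            # Saving after evaluation means loading this checkpoint later
            # yields identical validation losses.
            save_checkpoint(iteration, model, optimizer, lr_scheduler)  # placeholder
```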
- 22 Dec, 2020 (1 commit)
  - mohammad authored: Add the option for an fp32 residual connection (the fp32 residual-connection machinery itself still needs to be added).
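A minimal sketch of how such an option could be exposed, assuming an argparse-based setup like Megatron's; the flag name `--fp32-residual-connection` is inferred from the commit message rather than taken from the code:

```python
import argparse

parser = argparse.ArgumentParser(description='fp32 residual connection option (sketch)')
# Flag name inferred from the commit message; per the commit, the machinery
# that actually keeps residual connections in fp32 is not yet implemented.
parser.add_argument('--fp32-residual-connection', action='store_true',
                    help='Keep residual connections in fp32 even when the rest '
                         'of the model runs in reduced precision.')

args = parser.parse_args([])            # defaults only, for illustration
print(args.fp32_residual_connection)    # False unless the flag is passed
```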
- 19 Dec, 2020 (16 commits)
  - mohammad authored
  - mohammad authored
  - mshoeybi authored
  - mshoeybi authored
  - mshoeybi authored
  - mshoeybi authored
  - Jared Casper authored
  - Jared Casper authored
  - Jared Casper authored
  - Deepak Narayanan authored
  - mohammad authored
  - mohammad authored
  - mohammad authored
  - mohammad authored
  - mohammad authored: Rename --batch-size to --micro-batch-size and drop in-minibatch from --num-micro-batches-in-minibatch (the renamed arguments are sketched after this list).
  - Jared Casper authored
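A minimal sketch of the renamed batch-size arguments from the rename commit above, assuming an argparse setup; only `--micro-batch-size` is named explicitly in the commit, and `--num-micro-batches` plus the default values are assumptions for illustration:

```python
import argparse

parser = argparse.ArgumentParser(description='Batch-size arguments (sketch)')
# --batch-size is renamed: a micro-batch is what a single forward/backward pass sees.
parser.add_argument('--micro-batch-size', type=int, default=8,
                    help='Batch size per model replica for one forward/backward pass.')
# Presumed result of dropping "in-minibatch" from the old argument name.
parser.add_argument('--num-micro-batches', type=int, default=4,
                    help='Micro-batches accumulated per optimizer step (per minibatch).')

args = parser.parse_args([])
# Effective minibatch per data-parallel rank per optimizer step:
print(args.micro_batch_size * args.num_micro_batches)   # 32 with these defaults
```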
- 02 Dec, 2020 (1 commit)
  - mohammad authored
- 30 Nov, 2020 (1 commit)
  - mohammad authored
- 28 Nov, 2020 (1 commit)
  - mohammad authored
- 26 Nov, 2020 (1 commit)
  - mohammad authored
- 12 Nov, 2020 (17 commits)
  - Deepak Narayanan authored
  - Deepak Narayanan authored
  - Deepak Narayanan authored: Refactor code according to Jared's comments: move the pipelining and non-pipelining training loops into separate methods. Also, use mpu.get_*_model_parallel_size() instead of args.*_model_parallel_size.
  - mshoeybi authored: Allocate tensor in the `communicate()` method directly on GPU (instead of allocating on CPU and then moving to GPU); see the allocation sketch after this list.
  - Deepak Narayanan authored
  - Deepak Narayanan authored
  - Deepak Narayanan authored
  - Deepak Narayanan authored
  - Deepak Narayanan authored
  - Deepak Narayanan authored
  - Deepak Narayanan authored
  - Deepak Narayanan authored: Prevents data_loader from running out of training examples (one possible approach is sketched after this list).
  - Deepak Narayanan authored
  - Deepak Narayanan authored
  - Deepak Narayanan authored
  - Deepak Narayanan authored
  - Deepak Narayanan authored: Make sure all forward and backward operations are accounted for.
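For the `communicate()` change above (mshoeybi's commit), a minimal sketch of the allocation difference; the helper name, tensor shape, and dtype are illustrative, not taken from the actual code:

```python
import torch

def _allocate_recv_buffer(tensor_shape, dtype=torch.float16):
    """Illustrative: allocate a pipeline communication buffer directly on the GPU."""
    # Before: allocate on CPU, then move (an extra host allocation plus a copy).
    #   buf = torch.empty(tensor_shape, dtype=dtype).cuda()
    # After: allocate directly on the current CUDA device.
    return torch.empty(tensor_shape,
                       dtype=dtype,
                       device=torch.cuda.current_device())
```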
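For the data_loader commit above, a minimal sketch of one way to keep a loader from running out of examples (a wrap-around iterator); this is an assumption about the approach, not the actual change:

```python
def cyclic_iter(data_loader):
    """Restart the underlying loader whenever it is exhausted, so any number of
    training iterations can draw batches without hitting StopIteration."""
    while True:
        for batch in data_loader:
            yield batch

# Usage sketch:
#   train_iterator = cyclic_iter(train_data_loader)
#   batch = next(train_iterator)   # safe for an arbitrary number of steps
```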