- 03 Jun, 2020 1 commit
  - rohithkrn authored
    * bfloat16 support for apex DDP
    * enable mgpu tests for fp16 and bf16
    * update Dockerfile
- 26 May, 2020 1 commit
  - rohithkrn authored
- 21 May, 2020 2 commits
  - lcskrishna authored
  - sunway513 authored
- 20 May, 2020 2 commits
  - lcskrishna authored
  - lcskrishna authored
- 19 May, 2020 4 commits
  - lcskrishna authored
  - lcskrishna authored
  - lcskrishna authored
  - lcskrishna authored
- 15 May, 2020 2 commits
- 14 May, 2020 1 commit
  - Andrew Tulloch authored
- 13 May, 2020 1 commit
  - rohithkrn authored
- 07 May, 2020 1 commit
  - Chaitanya Sri Krishna Lolla authored
    * fix dropout scaling from p to 1/(1-p) (#816) Co-authored-by: Sukru Eryilmaz <seryilmaz@computelab-dgx1v-32.nvidia.com>
    * Improvements to apex.mlp (#804)
      * update fused bias relu backward kernel
      * add support for skipping the first layer's dgrad
      * fix bug: wrong layer in requires_grad
      * add infrastructure for optional bias and activation; currently supports only no-bias and no-relu
      * make bias and relu optional separately
      * add sigmoid activation option
    * enable wider load/store for multi_tensor_apply kernels (#763)
      * modify MTA axpby for wider load/store
      * make scale/axpby/l2/adam/lamb multi_tensor kernels use wider loads
    * make xentropysoftmax load/store vectorized when possible (#725)
      * increase the default ILP so that each thread handles 16 bytes of data per step
      * make threads load/store the longest vector possible
      * make the unroll case handle adjacent data instead of strided...
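For context on the dropout fix in #816: scaling by 1/(1-p) is standard inverted dropout, which divides surviving activations by the keep probability so the expected activation is unchanged at train time; scaling by p (the old behavior) does not preserve the expectation. A minimal pure-Python sketch of the corrected scaling (illustrative only, not the apex kernel; the function name is made up):

```python
import random

def inverted_dropout(xs, p, rng):
    """Inverted dropout: zero each element with probability p and scale
    survivors by 1/(1-p), so E[output] equals the input. Scaling by p
    instead (the bug this commit fixes) would shrink activations."""
    keep = 1.0 - p
    return [x / keep if rng.random() < keep else 0.0 for x in xs]

# With p = 0.3, survivors of x = 1.0 become 1/0.7 and the mean stays ~1.0.
out = inverted_dropout([1.0] * 10000, p=0.3, rng=random.Random(0))
```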
- 30 Apr, 2020 1 commit
  - Deyu Fu authored
    * update fused bias relu backward kernel
    * add support for skipping the first layer's dgrad
    * fix bug: wrong layer in requires_grad
    * add infrastructure for optional bias and activation; currently supports only no-bias and no-relu
    * make bias and relu optional separately
    * add sigmoid activation option
- 22 Apr, 2020 2 commits
  - Deyu Fu authored
  - Vinicius Reis authored
    The LARC optimizer wraps an underlying optimizer and then needs to be passed to amp.initialize for mixed precision. Three different crashes occurred in this situation; this fixes all of them and adds a unit test. It is unclear whether the 'LARC' check in sys.modules ever worked: in my setup the entry in sys.modules is 'apex.parallel.LARC'. Checking whether the variable is defined seems more reliable.
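For readers unfamiliar with what the wrapped optimizer above does: LARC derives a per-layer learning rate from the ratio of the parameter norm to the gradient norm, so layers with small gradients relative to their weights still make progress. A minimal sketch of that trust ratio in pure Python, assuming the standard LARC formulation (function name, default trust coefficient, and eps are illustrative assumptions, not apex.parallel.LARC itself):

```python
import math

def larc_local_lr(weights, grads, trust_coeff=0.02, eps=1e-8):
    """LARC-style layer-wise learning rate: proportional to
    ||w|| / ||g|| for one layer's parameters. Illustrative sketch,
    not apex's implementation."""
    w_norm = math.sqrt(sum(w * w for w in weights))
    g_norm = math.sqrt(sum(g * g for g in grads))
    return trust_coeff * w_norm / (g_norm + eps)

# ||w|| = 5 and ||g|| = 1, so the local lr is roughly 0.02 * 5 = 0.1.
lr = larc_local_lr([3.0, 4.0], [0.6, 0.8])
```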
- 31 Mar, 2020 1 commit
  - Jeff Bowles authored
- 27 Feb, 2020 1 commit
  - mcarilli authored
    * NHWC support for multi tensor apply
    * compilation fix for PyTorch version <= 1.4
- 06 Nov, 2019 1 commit
  - jjsjann123 authored
- 03 Oct, 2019 1 commit
  - ptrblck authored
    * increase atol for Half-Float comparison to 1.5e-4
    * disable tests for different opt_levels
    * reset atol
    * add bitwise-accurate comparison
- 06 Sep, 2019 1 commit
  - mcarilli authored
    * Pushing for build tests
    * Contrib files
    * Removing deprecated checks
- 03 Sep, 2019 1 commit
  - Deyu Fu authored
    * move the import of amp_C into __init__()
    * keep fp16 and fp32 params in separate lists to support mixed param types; disable the double test
    * make zero_grad consistent between adam/novograd/lamb
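The fp16/fp32 split above is essentially a partition of a mixed parameter list into per-dtype buckets, so a fused kernel can be launched once per homogeneous list instead of failing on mixed types. A hypothetical stand-in sketch (plain tuples instead of tensors; real code would inspect each tensor's dtype):

```python
from collections import defaultdict

def split_params_by_dtype(params):
    """Partition (name, dtype) pairs into per-dtype lists, preserving
    order within each bucket. Illustrative only, not apex code."""
    buckets = defaultdict(list)
    for name, dtype in params:
        buckets[dtype].append(name)
    return dict(buckets)

groups = split_params_by_dtype(
    [("w1", "fp16"), ("b1", "fp32"), ("w2", "fp16")]
)
```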
- 27 Aug, 2019 1 commit
  - ptrblck authored
    * add state_dict, load_state_dict
    * add test_restoring, test_loss_scale_decrease
    * disable amp outputs for checkpoint tests
    * add a test for amp.state_dict; cleanup
    * add state_dict patch and test
    * fix testing; cleanup
    * add readme for checkpointing
    * add docs to source/amp
    * apply review changes to docs
- 17 Aug, 2019 1 commit
  - Deyu Fu authored
- 15 Aug, 2019 1 commit
  - Christian Clauss authored
- 13 Aug, 2019 2 commits
  - Deyu Fu authored
    FusedSGD now works as before. FusedAdam now works with O1/O2 and no longer fuses scaling and casting. Removed the special backend handling for FusedAdam. Moved and updated the FusedAdam test into run_optimizers. Removed legacy tests for optimizers.FP16_optimizer and FusedAdam in run_mixed_adam.
  - Marek Kolodziej authored
    Co-authored-by: Aditya Agrawal <aditya.iitb@gmail.com>
    Co-authored-by: Marek Kolodziej <mkolod@gmail.com>
- 12 Aug, 2019 1 commit
  - Deyu Fu authored
- 08 Aug, 2019 1 commit
  - Deyu Fu authored
- 06 Aug, 2019 1 commit
  - ngimel authored
    * bug fix for non-affine layer-norm, plus a backward unit test
    * clean up tests and add tests for a large batch
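As background on the non-affine case fixed above: layer norm normalizes each feature vector to zero mean and unit variance, then optionally applies a learned affine transform (gamma, beta); the non-affine path skips that transform. A pure-Python sketch of the math (illustrative, not the apex kernel; names and the eps default are assumptions):

```python
import math

def layer_norm(xs, eps=1e-5, gamma=None, beta=None):
    """Normalize xs to zero mean and unit variance. gamma/beta are the
    optional affine parameters; gamma is None is the non-affine path."""
    mean = sum(xs) / len(xs)
    var = sum((x - mean) ** 2 for x in xs) / len(xs)
    ys = [(x - mean) / math.sqrt(var + eps) for x in xs]
    if gamma is not None:
        ys = [g * y + b for g, y, b in zip(gamma, ys, beta)]
    return ys

# Non-affine output has mean ~0 and variance ~1.
out = layer_norm([1.0, 2.0, 3.0, 4.0])
```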
- 26 Jul, 2019 1 commit
  - jjsjann123 authored
    * fix empty return from the python implementation
    * add a proper test to verify functional correctness of the python implementation
- 12 Jul, 2019 1 commit
  - jjsjann123 authored
    * fix empty return from the python implementation
    * add a proper test to verify functional correctness of the python implementation
- 03 Jul, 2019 3 commits
  - Michael Carilli authored
  - Michael Carilli authored
  - Michael Carilli authored
- 31 May, 2019 1 commit
  - mcarilli authored
    * Existing tests passing; still need to add per-tensor tests
    * Test is passing; still need to measure performance
    * ILP for the l2norm functor
- 27 May, 2019 2 commits
  - Michael Carilli authored
  - Michael Carilli authored