- 19 Oct, 2021 1 commit
  Hubert Lu authored

- 08 Oct, 2021 1 commit
  Masaki Kozuki authored
  * run backward
  * remove custom_fwd/custom_bwd

- 06 Oct, 2021 1 commit
  Masaki Kozuki authored
  * [ColumnParallelLinear] test behavior in autocast
  * fix test
  * cast manually to the autocast dtype

- 02 Oct, 2021 1 commit
  Masaki Kozuki authored
  Co-authored-by: Piotr Bialecki <pbialecki@nvidia.com>
  Co-authored-by: Eddie Yan <eddiey@nvidia.com>
  Co-authored-by: Rishi Puri <riship@nvidia.com>
  Co-authored-by: Sangkug Lym <slym@nvidia.com>

- 15 Apr, 2021 1 commit
  Sudhakar Singh authored
  * add unit tests for fused Novograd
  * fix: tensors should reside on the same device
  * fix: the CUDA stream should be obtained on the same device the tensors reside on (found while debugging the fused-Novograd multi-device unit test)
  * fix issues mentioned in the review comments

- 25 Jan, 2021 1 commit
  Jeff Daily authored
  * fix incorrect use of __shfl_down
  * fix warp-size assumptions
  * update unit tests to exit on failure

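The warp-size assumptions fixed above matter because NVIDIA warps have 32 lanes while AMD (ROCm) wavefronts have 64, so a shuffle reduction hard-coded to start at offset 16 only sums half a wavefront. A minimal pure-Python model of a `__shfl_down` tree reduction (the function names here are illustrative, not the CUDA API) shows the failure mode:

```python
def shfl_down(lanes, offset):
    """Model of __shfl_down: lane i reads the value held by lane i + offset;
    lanes past the end keep their own value."""
    n = len(lanes)
    return [lanes[i + offset] if i + offset < n else lanes[i] for i in range(n)]

def warp_reduce_sum(values, warp_size):
    """Tree sum across one warp, halving the shuffle offset each step.
    Starting from warp_size // 2 is what makes the loop portable:
    hard-coding 16 silently drops lanes 32..63 on a 64-lane wavefront."""
    lanes = list(values)
    offset = warp_size // 2
    while offset > 0:
        shifted = shfl_down(lanes, offset)
        lanes = [a + b for a, b in zip(lanes, shifted)]
        offset //= 2
    return lanes[0]  # lane 0 ends up holding the full sum
```

Running the model with a 64-lane wavefront but a 32-lane assumption reproduces the wrong result the commit guards against.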
- 21 Jan, 2021 1 commit
  Jeff Daily authored
  * use __launch_bounds__(1024) for multi_tensor_apply
  * re-enable skipped tests

- 18 Jan, 2021 1 commit
  Jeff Daily authored

- 15 Jan, 2021 1 commit
  Sarunya Pumma authored

- 31 Dec, 2020 2 commits
  lcskrishna authored
  lcskrishna authored

- 01 Dec, 2020 1 commit
  Kexin Yu authored
  DistributedFusedAdam model-parallelism support (Megatron)
  Co-authored-by: Kexin Yu <kexiny@nvidia.com>
  Co-authored-by: Kexin Yu <kexinznzn@gmail.com>

- 04 Nov, 2020 1 commit
  Ashish Farmer authored
  * fix warp size in WARP_SHFL* in layernorm
  * enable fused_layer_norm tests on ROCm

- 05 Aug, 2020 2 commits
  Chaitanya Sri Krishna Lolla authored
  * enable mlp cuda
  * add setup changes and tests
  * skip the unit tests
  * update conditions for empty array
  * remove HIP-platform conditions

  ngimel authored
  * add device guards to the optimizers
  * add untracked file
  * set deviceGuard in multi_tensor_apply
  * address review comments; fix LAMB
  * fix indentation
  * fix typo

- 31 Jul, 2020 1 commit
  Chaitanya Sri Krishna Lolla authored

- 10 Jul, 2020 1 commit
  Chaitanya Sri Krishna Lolla authored
  * enable sync batchnorm
  * enable syncbn properly
  * update the unit tests
  * update tests
  * update conditions for welford_merge_element
  * update conditions based on review comments

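The `welford_merge_element` conditions touched above combine per-chunk Welford statistics when sync batchnorm reduces across devices. A minimal sketch of the underlying merge rule (Chan et al.'s parallel variance formula; the function name and the empty-chunk guard are illustrative, not apex's kernel code):

```python
def welford_merge(count_a, mean_a, m2_a, count_b, mean_b, m2_b):
    """Merge two (count, mean, M2) Welford accumulators.

    M2 is the sum of squared deviations from the mean, so variance = M2 / count.
    Guarding against an empty chunk (count == 0) is the kind of condition the
    merge needs once batch sizes differ across the process group.
    """
    count = count_a + count_b
    if count == 0:
        return 0, 0.0, 0.0
    delta = mean_b - mean_a
    mean = mean_a + delta * count_b / count
    m2 = m2_a + m2_b + delta * delta * count_a * count_b / count
    return count, mean, m2
```

Merging the statistics of [1, 2, 3] and [4, 5, 6, 7] reproduces the mean and M2 of the concatenated data exactly, which is the whole point of the formulation: no second pass over the inputs is needed.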
- 07 Jul, 2020 1 commit
  lcskrishna authored

- 06 Jul, 2020 1 commit
  jjsjann123 authored
  * [sync BN] support non-uniform batch size across the process group (TODO: add tests once cleaned up)
  * update unit tests
  * new unit tests for different inputs
  * cleanup

- 23 Jun, 2020 3 commits

- 03 Jun, 2020 1 commit
  rohithkrn authored
  * bfloat16 support for apex DDP
  * enable mgpu tests for fp16 and bf16
  * update Dockerfile

- 26 May, 2020 1 commit
  rohithkrn authored

- 21 May, 2020 2 commits
  lcskrishna authored
  sunway513 authored

- 20 May, 2020 2 commits
  lcskrishna authored
  lcskrishna authored

- 19 May, 2020 4 commits
  lcskrishna authored
  lcskrishna authored
  lcskrishna authored
  lcskrishna authored

- 15 May, 2020 2 commits

- 14 May, 2020 1 commit
  Andrew Tulloch authored

- 13 May, 2020 1 commit
  rohithkrn authored

- 07 May, 2020 1 commit
  Chaitanya Sri Krishna Lolla authored
  * fix dropout scaling from p to 1/(1-p) (#816)
    Co-authored-by: Sukru Eryilmaz <seryilmaz@computelab-dgx1v-32.nvidia.com>
  * improvements to apex.mlp (#804)
    * update fused bias-ReLU backward kernel
    * add support for not requiring first-layer dgrad
    * fix bug: wrong layer in requires_grad
    * add infrastructure for optional bias and activation; currently only no-bias and no-ReLU are supported
    * make bias and ReLU optional separately
    * add sigmoid activation option
  * enable wider load/store for multi_tensor_apply kernels (#763)
    * modify MTA axpby for wider load/store
    * make the scale/axpby/l2/adam/lamb multi_tensor kernels use wider loads
  * make xentropy softmax load/store vectorized when possible (#725)
    * increase the default ILP so each thread handles 16 bytes of data per step
    * make each thread load/store the longest vector possible
    * make the unroll case handle adjacent data instead of strided...

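The dropout fix in #816 concerns inverted dropout: survivors must be scaled by 1/(1-p), not by p, so the expected value of each activation is unchanged at training time. A minimal sketch of the corrected scaling (pure Python, illustrative names, not the apex kernel):

```python
import random

def inverted_dropout(values, p, rng):
    """Inverted dropout: zero each element with probability p and scale the
    survivors by 1/(1-p), so E[output] == input elementwise.
    Scaling by p instead (the bug fixed in #816) would shrink activations.
    """
    if not 0.0 <= p < 1.0:
        raise ValueError("p must be in [0, 1)")
    scale = 1.0 / (1.0 - p)
    return [0.0 if rng.random() < p else v * scale for v in values]
```

With p = 0 the input passes through untouched, and for p = 0.5 the sample mean of repeated draws converges to the input value, which is exactly the invariant the 1/(1-p) factor buys.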
- 30 Apr, 2020 1 commit
  Deyu Fu authored
  * update fused bias-ReLU backward kernel
  * add support for not requiring first-layer dgrad
  * fix bug: wrong layer in requires_grad
  * add infrastructure for optional bias and activation; currently only no-bias and no-ReLU are supported
  * make bias and ReLU optional separately
  * add sigmoid activation option

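The fused bias-ReLU backward above exploits that ReLU's derivative can be read off the saved output, and that the bias gradient is just the column sum of the masked upstream gradient. A pure-Python sketch of the math (names and the `need_dgrad` flag are illustrative; the "not requiring first-layer dgrad" option corresponds to skipping the input gradient):

```python
def bias_relu_backward(grad_out, out, need_dgrad=True):
    """Backward of y = relu(x + b), given dL/dy and the saved output y.

    relu'(x + b) is 1 exactly where y > 0, so one masked gradient serves both
    results:
      dgrad[i][j] = grad_out[i][j] * (out[i][j] > 0)
      dbias[j]    = sum_i dgrad[i][j]
    Skipping dgrad for the first layer avoids a useless kernel launch.
    """
    rows, cols = len(grad_out), len(grad_out[0])
    masked = [[grad_out[i][j] if out[i][j] > 0 else 0.0 for j in range(cols)]
              for i in range(rows)]
    dbias = [sum(masked[i][j] for i in range(rows)) for j in range(cols)]
    return (masked if need_dgrad else None), dbias
```

Fusing the mask and the column reduction into one pass is what makes the kernel worth writing: both outputs come from a single read of grad_out.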
- 22 Apr, 2020 2 commits
  Deyu Fu authored

  Vinicius Reis authored
  The LARC optimizer wraps an underlying optimizer and then needs to be passed to amp.initialize for mixed precision. Three different crashes happened in this situation; fix all of them and add a unit test. I don't know if the 'LARC' in sys.modules check ever worked: in my setup the entry in sys.modules is 'apex.parallel.LARC', so checking whether the variable is defined seems more reliable.

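For context on what the wrapped optimizer computes: LARC scales each layer's step by a trust ratio of parameter norm to gradient norm, optionally clipped so it never exceeds the global learning rate. A minimal sketch of one common formulation of that local rate (the signature and defaults here are illustrative, not the apex.parallel.LARC API):

```python
import math

def larc_local_lr(weights, grads, global_lr, trust_coef=0.02,
                  weight_decay=0.0, clip=True, eps=1e-8):
    """Layer-wise adaptive rate in the LARC style:
        adaptive = trust_coef * ||w|| / (||g|| + weight_decay * ||w||)
    With clipping enabled the layer never steps faster than global_lr."""
    w_norm = math.sqrt(sum(w * w for w in weights))
    g_norm = math.sqrt(sum(g * g for g in grads))
    if w_norm == 0.0 or g_norm == 0.0:
        return global_lr  # fall back to the plain learning rate
    adaptive = trust_coef * w_norm / (g_norm + weight_decay * w_norm + eps)
    return min(adaptive, global_lr) if clip else adaptive
```

Because the trust ratio is computed per layer from the wrapped optimizer's parameter groups, LARC has to sit between the user's optimizer and amp.initialize, which is exactly where the crashes fixed above occurred.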