- 25 Feb, 2021 1 commit
  Jeff Daily authored
  This reverts commit bdd481d1.
- 25 Jan, 2021 1 commit
  Jeff Daily authored
  * incorrect use of __shfl_down
  * fix warp size assumptions
  * update unit tests to exit on failure
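The warp-size fix above touches the usual CUDA-to-ROCm pitfall: a warp has 32 lanes on NVIDIA hardware but a wavefront has 64 on AMD hardware, so a shuffle reduction must not hard-code 32. A minimal, hypothetical sketch of a warp-size-agnostic reduction (the macro and function names are illustrative, not the apex source):

```cuda
// Hypothetical sketch, not the apex kernel: a warp-level sum reduction that
// derives its width from the built-in warpSize (32 on CUDA, 64 on ROCm/HIP)
// instead of assuming 32 lanes.
#if defined(__HIP_PLATFORM_HCC__) || defined(__HIP_PLATFORM_AMD__)
  // HIP exposes the unsynchronized shuffle across the full wavefront.
  #define WARP_SHFL_DOWN(val, offset) __shfl_down(val, offset)
#else
  // CUDA 9+ requires the *_sync variant with a full-warp mask.
  #define WARP_SHFL_DOWN(val, offset) __shfl_down_sync(0xffffffff, val, offset)
#endif

__device__ float warp_reduce_sum(float val) {
  // Halve the shuffle distance each step: warpSize/2, warpSize/4, ..., 1.
  for (int offset = warpSize / 2; offset > 0; offset >>= 1) {
    val += WARP_SHFL_DOWN(val, offset);
  }
  return val;
}
```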
- 21 Jan, 2021 1 commit
  Jeff Daily authored
  use __launch_bounds__(1024) for multi_tensor_apply, re-enable skipped tests
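For context on the launch-bounds change: __launch_bounds__(1024) promises the compiler that the kernel will never be launched with more than 1024 threads per block, so it can budget registers for that upper bound. A hedged sketch with an illustrative kernel (the real change annotates apex's multi_tensor_apply kernel, not this one):

```cuda
// Illustrative kernel annotated with __launch_bounds__(1024); the name and
// body are placeholders, only the annotation placement matters here.
__global__ void __launch_bounds__(1024) scale_kernel(float* data, float factor, int n) {
  int idx = blockIdx.x * blockDim.x + threadIdx.x;
  if (idx < n) {
    data[idx] *= factor;
  }
}
```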
- 18 Jan, 2021 1 commit
  Jeff Daily authored
- 15 Jan, 2021 1 commit
  Sarunya Pumma authored
- 04 Nov, 2020 1 commit
  Ashish Farmer authored
  * fix warp size in WARP_SHFL* in layernorm
  * enable fused_layer_norm tests on ROCm
- 19 Oct, 2020 1 commit
  lly-zero-one authored
  In this PR, we mainly tried to optimize the performance of SyncBatchNorm and also fixed one potential issue in the welford_parallel kernel implementation. For the performance improvement, we batched the mean/var/count all_gather communication together and sent it once in the forward path; we also batched the all_reduce in the backward path. We added a contiguous call on the input of the welford_parallel kernel. If there is any standard perf benchmark, I would be happy to run it.
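For reference, a welford_parallel-style reduction combines partial (mean, m2, count) triples computed over different chunks of the batch; the combine step is essentially the parallel Welford (Chan et al.) formula. The sketch below shows only that merge step; names and layout are illustrative, not the apex kernel:

```cuda
// Illustrative device-side merge of two Welford partials (mean, m2, count),
// where m2 is the sum of squared deviations and variance = m2 / count.
__device__ void welford_merge(float& mean_a, float& m2_a, float& count_a,
                              float mean_b, float m2_b, float count_b) {
  float count = count_a + count_b;
  // Guard the division so an empty partial (count == 0) stays harmless.
  float inv = count > 0.f ? 1.f / count : 0.f;
  float delta = mean_b - mean_a;
  mean_a += delta * count_b * inv;
  m2_a   += m2_b + delta * delta * count_a * count_b * inv;
  count_a = count;
}
```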
- 05 Aug, 2020 2 commits
  Chaitanya Sri Krishna Lolla authored
  * enable mlp cuda
  * add setup changes and tests
  * skip the unit tests
  * updated conditions for empty array
  * removed hip platform conditions

  ngimel authored
  * add device guards to the optimizers
  * add untracked file
  * set deviceGuard in multi_tensor_apply
  * address review comments; fix lamb
  * indent
  * typo
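The device-guard commit is about multi-GPU correctness: kernels must be launched on the device that owns the tensors, not whichever device happens to be current. A minimal sketch of the usual PyTorch C++ extension pattern (the function name and body are placeholders, not the apex code):

```cuda
// Hedged sketch: guard the launch with the tensor's device.  The guard
// switches the current CUDA/HIP device on construction and restores the
// previous device when it goes out of scope.
#include <ATen/ATen.h>
#include <ATen/DeviceGuard.h>
#include <c10/cuda/CUDAGuard.h>

void launch_on_tensor_device(const at::Tensor& t) {
  // Select t's device for every launch inside this scope.
  const at::cuda::OptionalCUDAGuard device_guard(at::device_of(t));
  // ... multi_tensor_apply / optimizer kernel launches would go here ...
}
```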
- 10 Jul, 2020 1 commit
  Chaitanya Sri Krishna Lolla authored
  * Enable sync batchnorm
  * enable syncbn properly
  * update the unit tests
  * update tests
  * update conditions for welford_merge_element
  * updated conditions based on comments
- 06 Jul, 2020 1 commit
  jjsjann123 authored
  * [sync BN] support non-uniform batch size across process group. TODO: test should be added once cleaned up.
  * updating unit tests
  * new unit tests for different inputs
  * cleaning
- 22 Jun, 2020 1 commit
  ashishfarmer authored
- 15 Jun, 2020 1 commit
  rohithkrn authored
- 26 May, 2020 1 commit
  rohithkrn authored
- 23 May, 2020 1 commit
  Kexin Yu authored
- 22 May, 2020 5 commits
- 21 May, 2020 2 commits
  Kexin Yu authored

  Jeff Daily authored
- 20 May, 2020 1 commit
  lcskrishna authored
- 14 May, 2020 1 commit
  Andrew Tulloch authored
- 12 May, 2020 2 commits
  Chaitanya Sri Krishna Lolla authored

  rohithkrn authored
- 07 May, 2020 2 commits
  Chaitanya Sri Krishna Lolla authored

  Chaitanya Sri Krishna Lolla authored
  * fix dropout scaling from p to 1/(1-p) (#816)
  * Improvements to apex.mlp (#804)
    * update fused bias relu backward kernel
    * adding support for not requiring first layer dgrad
    * fix bug: wrong layer in requires grad
    * add infrastructure for optional bias and activation, currently only supports no bias and no relu
    * make bias and relu optional separately
    * add sigmoid activation option
  * enable wider load/store for multi_tensor_apply kernels (#763)
    * modify MTA axpby for wider load/store
    * make scale/axpby/l2/adam/lamb multi_tensor use wider loads
  * Changes to make xentropy softmax load/store vectorized when possible (#725)
    * increase default ILP so that each thread handles 16 bytes of data in one step
    * make each thread load/store the longest vector possible
    * make the unroll case handle adjacent data instead of strided, so the ordering matches the vector case
    * add a shift for the unaligned case; remove accesses aligned to less than 16 bytes
  Co-authored-by: Sukru Eryilmaz <seryilmaz@computelab-dgx1v-32.nvidia.com>
  Co-authored-by: Burc Eryilmaz <sberyilm@gmail.com>
  Co-authored-by: Deyu Fu <deyuf@nvidia.com>
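On the dropout fix in #816: inverted dropout must scale the surviving activations by 1/(1-p), not by p, so the expected value of each activation is unchanged at training time. A toy kernel showing just that scaling (illustrative only, not the fused apex MLP kernel):

```cuda
// Illustrative inverted-dropout scaling: elements kept by the mask are scaled
// by 1/(1-p); dropped elements become zero.
__global__ void dropout_scale(float* x, const unsigned char* keep_mask,
                              float p, int n) {
  int i = blockIdx.x * blockDim.x + threadIdx.x;
  if (i < n) {
    float scale = 1.0f / (1.0f - p);        // the corrected factor
    x[i] = keep_mask[i] ? x[i] * scale : 0.0f;
  }
}
```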
- 30 Apr, 2020 3 commits
  Kexin Yu authored

  Deyu Fu authored
  * modify MTA axpby for wider load/store
  * make scale/axpby/l2/adam/lamb multi_tensor use wider loads
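The "wider load/store" change is about vectorized memory access: each thread moves 16 bytes (a float4) per transaction instead of 4 bytes. A hedged sketch of an axpby step written this way, assuming the pointers are 16-byte aligned; it is not the apex multi_tensor_apply code:

```cuda
// Illustrative vectorized axpby: y = a*x + b*y, four floats per thread via
// float4 loads/stores.  The tail that is not a multiple of 4 falls back to
// scalar accesses.
__global__ void axpby_vec4(const float* x, float* y, float a, float b, int n) {
  int i = (blockIdx.x * blockDim.x + threadIdx.x) * 4;
  if (i + 3 < n) {
    float4 xv = *reinterpret_cast<const float4*>(x + i);
    float4 yv = *reinterpret_cast<float4*>(y + i);
    yv.x = a * xv.x + b * yv.x;
    yv.y = a * xv.y + b * yv.y;
    yv.z = a * xv.z + b * yv.z;
    yv.w = a * xv.w + b * yv.w;
    *reinterpret_cast<float4*>(y + i) = yv;
  } else {
    for (; i < n; ++i) {
      y[i] = a * x[i] + b * y[i];
    }
  }
}
```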
  Deyu Fu authored
  * update fused bias relu backward kernel
  * adding support for not requiring first layer dgrad
  * fix bug: wrong layer in requires grad
  * add infrastructure for optional bias and activation, currently only supports no bias and no relu
  * make bias and relu optional separately
  * add sigmoid activation option
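As a rough picture of what a bias+ReLU backward computes: the incoming gradient is masked by where the ReLU output was positive, and the bias gradient is the per-feature sum of that masked gradient. The naive kernel below sketches only that math, not the optimized fused apex kernel:

```cuda
// Naive, illustrative bias+ReLU backward for a [rows, cols] activation.
// grad_in receives the ReLU-masked gradient; grad_bias its column sums.
__global__ void bias_relu_backward(const float* grad_out, const float* relu_out,
                                   float* grad_in, float* grad_bias,
                                   int rows, int cols) {
  int c = blockIdx.x * blockDim.x + threadIdx.x;
  if (c >= cols) return;
  float db = 0.0f;
  for (int r = 0; r < rows; ++r) {
    int idx = r * cols + c;
    float g = relu_out[idx] > 0.0f ? grad_out[idx] : 0.0f;  // ReLU mask
    grad_in[idx] = g;
    db += g;                                                // bias gradient accum
  }
  grad_bias[c] = db;
}
```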
- 28 Apr, 2020 2 commits
  Kexin Yu authored

  Chaitanya Sri Krishna Lolla authored
  * Initial commit to hipify all cuda code
  * enable multi_tensor_apply extension
  * added generatedFileCleaner to handle nested hip files
- 22 Apr, 2020 1 commit
  Deyu Fu authored
- 10 Apr, 2020 1 commit
  Thor Johnsen authored
- 27 Feb, 2020 1 commit
  mcarilli authored
  * NHWC support for multi tensor apply
  * compilation fix for version<=1.4
- 04 Oct, 2019 1 commit
  Deyu Fu authored
  * move previous fused_adam and fp16_optimizer to contrib
  * make build contrib.fused_adam optional
  * change build option name
  * remove unnecessary try import
- 06 Sep, 2019 1 commit
  mcarilli authored
  * Pushing for build tests
  * Contrib files
  * Removing deprecated checks
- 20 Aug, 2019 1 commit
  Deyu Fu authored
- 17 Aug, 2019 1 commit
  Deyu Fu authored