- 20 May, 2020 2 commits
  - lcskrishna authored
  - lcskrishna authored
- 19 May, 2020 7 commits
  - Peng authored: Introduce new optimization levels for BFloat16 training
  - lcskrishna authored
  - lcskrishna authored
  - lcskrishna authored
  - lcskrishna authored
  - lcskrishna authored
- 18 May, 2020 1 commit
  - Chaitanya Sri Krishna Lolla authored
- 15 May, 2020 4 commits
  - Ashish Farmer authored: [Upstream] IFU 05/15/2020
  - Chaitanya Sri Krishna Lolla authored
  - rohithkrn authored
  - rohithkrn authored
- 14 May, 2020 1 commit
  - Andrew Tulloch authored
- 13 May, 2020 3 commits
  - Andrew Sears authored: Signed-off-by: asears <asears@users.noreply.github.com>
  - rohithkrn authored
  - rohithkrn authored
- 12 May, 2020 5 commits
  - Chaitanya Sri Krishna Lolla authored
  - Thor Johnsen authored: Reversible fused adam with mt support
  - rohithkrn authored
  - rohithkrn authored
  - Thor Johnsen authored
- 11 May, 2020 1 commit
  - rohithkrn authored
- 09 May, 2020 1 commit
  - rohithkrn authored
- 08 May, 2020 3 commits
  - Thor Johnsen authored
  - rohithkrn authored
  - rohithkrn authored
- 07 May, 2020 5 commits
  - Chaitanya Sri Krishna Lolla authored
  - Thor Johnsen authored
  - Chaitanya Sri Krishna Lolla authored
  - Chaitanya Sri Krishna Lolla authored:
    * fix dropout scaling from p to 1/(1-p) (#816). Co-authored-by: Sukru Eryilmaz <seryilmaz@computelab-dgx1v-32.nvidia.com>
    * Improvements to apex.mlp (#804): update fused bias-ReLU backward kernel; add support for not requiring the first layer's dgrad; fix bug: wrong layer in requires_grad; add infrastructure for optional bias and activation (currently only no bias and no ReLU); make bias and ReLU optional separately; add a sigmoid activation option
    * enable wider load/store for multi_tensor_apply kernels (#763): modify MTA axpby for wider load/store; make the scale/axpby/l2/adam/lamb multi_tensor kernels use wider loads
    * Changes to make xentropysoftmax load/store vectorized when possible (#725): increase the default ILP so each thread handles 16 bytes of data per step; make each thread load/store the longest vector possible; make the unroll case handle adjacent data instead of strided...
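The dropout fix mentioned above changes the scale factor from p to 1/(1-p), i.e. standard inverted dropout: survivors are scaled up so the expected activation is unchanged. A minimal sketch in plain Python (not apex's actual CUDA kernel; the function name and signature are illustrative):

```python
import random

def inverted_dropout(xs, p, rng):
    """Inverted dropout sketch: drop each element with probability p and
    scale survivors by 1/(1-p), so E[output] == input in expectation."""
    if p == 0.0:
        return list(xs)  # nothing dropped, nothing rescaled
    scale = 1.0 / (1.0 - p)  # the corrected factor: 1/(1-p), not p
    return [x * scale if rng.random() >= p else 0.0 for x in xs]
```

With the buggy scale of p, surviving activations would be attenuated rather than compensated; with 1/(1-p), the mean output matches the input at any drop probability.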
  - Thor Johnsen authored
- 06 May, 2020 3 commits
  - Thor Johnsen authored
  - Thor Johnsen authored
  - Thor Johnsen authored
- 05 May, 2020 1 commit
  - Thor Johnsen authored
- 04 May, 2020 1 commit
  - Thor Johnsen authored
- 01 May, 2020 1 commit
  - Deyu Fu authored:
    * Changes to make xentropysoftmax load/store vectorized when possible: increase the default ILP so each thread handles 16 bytes of data per step; make each thread load/store the longest vector possible; make the unroll case handle adjacent data instead of strided, so it matches the ordering of the vector case
    * Add a shift for the unaligned case; remove accesses that are less than 16-byte aligned
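The loop structure this commit describes (a scalar "shift" up to the first aligned boundary, a 4-float-wide vectorized body, and a scalar tail) can be modeled in a few lines. A hedged Python sketch of the traversal pattern only, not the actual CUDA kernel; `vectorized_walk` and its arguments are illustrative names:

```python
VEC = 4  # one 16-byte "vector" load moves 4 x 4-byte floats at once

def vectorized_walk(data, shift):
    """Model a vectorized kernel loop: `shift` scalar elements to reach
    the first 16-byte boundary, VEC-wide steps through the aligned
    middle, then a scalar tail for the leftovers. Returns the element
    sum plus the count of wide iterations (the reduced memory traffic)."""
    total = 0.0
    i = 0
    while i < min(shift, len(data)):   # scalar prologue: unaligned head
        total += data[i]
        i += 1
    wide_iters = 0
    while i + VEC <= len(data):        # vector body: one 16-byte access per step
        total += sum(data[i:i + VEC])
        i += VEC
        wide_iters += 1
    while i < len(data):               # scalar epilogue: remaining tail
        total += data[i]
        i += 1
    return total, wide_iters
```

The payoff is in the middle loop: each iteration services four elements with a single wide access, which is why the commit raises the default ILP so every thread handles 16 bytes per step.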
- 30 Apr, 2020 1 commit
  - Deyu Fu authored:
    * modify MTA axpby for wider load/store
    * make the scale/axpby/l2/adam/lamb multi_tensor kernels use wider loads