- 10 Dec, 2020 1 commit
  lcskrishna authored
- 09 Dec, 2020 2 commits
  lcskrishna authored
  lcskrishna authored
- 18 Aug, 2020 1 commit
  Chaitanya Sri Krishna Lolla authored
    * enable deprecated fused adam optimizer
    * enable deprecated fused lamb
    * enable xentropy extension
    * add warpsize 32 for nv and 64 for amd
    * update compiler arguments
    * update the syncwarp conditions
    * update syncwarp condition
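The warp-size bullet above reflects that NVIDIA warps are 32 threads wide while AMD (ROCm) wavefronts are 64. As a rough sketch only (not the repository's actual setup.py), a build script could forward that difference to the extension compiler as a define; the extension name, source paths, and macro name here are assumptions:

```python
# Illustrative sketch: pass a platform-dependent warp size to a fused-optimizer
# extension build. Extension name, sources, and WARP_SIZE macro are hypothetical.
import torch
from torch.utils.cpp_extension import CUDAExtension

# NVIDIA warps are 32 threads wide; AMD (ROCm) wavefronts are 64.
WARP_SIZE = 64 if torch.version.hip else 32

fused_adam_ext = CUDAExtension(
    name="fused_adam_cuda",
    sources=["csrc/fused_adam_cuda.cpp", "csrc/fused_adam_cuda_kernel.cu"],
    extra_compile_args={
        "cxx": ["-O3"],
        "nvcc": ["-O3", f"-DWARP_SIZE={WARP_SIZE}"],
    },
)
```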
- 17 Aug, 2020 1 commit
  Chaitanya Sri Krishna Lolla authored
    * enable deprecated fused adam optimizer
    * enable deprecated fused lamb
    * reset the compiler arguments
    * syntax error
    * aligning the compiler arguments
- 05 Aug, 2020 1 commit
  Chaitanya Sri Krishna Lolla authored
    * enable mlp cuda
    * add setup changes and tests
    * skip the unit tests
    * updated conditions for empty array
    * removed hip platform conditions
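A fused MLP extension like the one enabled above is typically validated against a plain-PyTorch baseline in unit tests. A minimal sketch of such a reference model (layer sizes are arbitrary; this is not the extension's own test code):

```python
# Plain nn.Linear/ReLU stack of the kind a fused MLP kernel is compared against.
import torch
import torch.nn as nn

def reference_mlp(sizes, bias=True):
    layers = []
    for i in range(len(sizes) - 1):
        layers.append(nn.Linear(sizes[i], sizes[i + 1], bias=bias))
        if i < len(sizes) - 2:          # ReLU between hidden layers only
            layers.append(nn.ReLU())
    return nn.Sequential(*layers)

mlp = reference_mlp([480, 1024, 1024, 512])
out = mlp(torch.randn(8, 480))          # batch of 8 inputs
print(out.shape)                        # torch.Size([8, 512])
```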
- 10 Jul, 2020 1 commit
  Chaitanya Sri Krishna Lolla authored
    * Enable sync batchnorm
    * enable syncbn properly
    * update the unit tests
    * update tests
    * update conditions for welford_merge_element
    * updated conditions based on comments.
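For context, a minimal usage sketch of synchronized batch norm, assuming the convert_syncbn_model helper that the Apex README documents (the toy model below is illustrative):

```python
# Minimal sketch: swap BatchNorm layers for synchronized ones so batch
# statistics are reduced across processes during distributed training.
import torch.nn as nn
from apex.parallel import convert_syncbn_model

model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    nn.BatchNorm2d(16),
    nn.ReLU(),
)
model = convert_syncbn_model(model)   # BatchNorm2d -> apex SyncBatchNorm
```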
- 01 Jun, 2020 1 commit
  mcarilli authored
    Co-authored-by: Michael Carilli <mcarilli@nvidia.com>
- 30 May, 2020 2 commits
  Thor Johnsen authored
  Thor Johnsen authored
- 29 May, 2020 1 commit
  Burc Eryilmaz authored
    Fuses dropout and softmax in backward pass, adds bias support to CPP MHA, adds additive mask support, separates Q/K/V parameters (#854)
    Co-authored-by: Sukru Eryilmaz <seryilmaz@computelab-dgx1v-32.nvidia.com>
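As a point of reference for the fusion above, the unfused sequence the kernel replaces is an additive mask, a softmax, and a dropout on the attention scores. A plain-PyTorch sketch with illustrative shapes and mask:

```python
# Unfused reference for the fused path: additive mask + softmax + dropout on
# attention scores, followed by a backward pass. Shapes are illustrative.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
scores = torch.randn(2, 8, 16, 16, requires_grad=True)   # (batch, heads, query, key)
additive_mask = torch.zeros(2, 1, 16, 16)
additive_mask[:, :, :, 8:] = float("-inf")                # hide the last 8 keys

probs = F.dropout(F.softmax(scores + additive_mask, dim=-1), p=0.1, training=True)
probs.sum().backward()     # the fused kernel computes this backward in one pass
print(scores.grad.shape)   # torch.Size([2, 8, 16, 16])
```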
- 20 May, 2020 1 commit
  Jeff Daily authored
- 18 May, 2020 1 commit
  Chaitanya Sri Krishna Lolla authored
- 14 May, 2020 1 commit
  Andrew Tulloch authored
- 07 May, 2020 1 commit
  Chaitanya Sri Krishna Lolla authored
- 28 Apr, 2020 1 commit
  Chaitanya Sri Krishna Lolla authored
    * Initial commit to hipify all cuda code
    * enable multi_tensor_apply extension
    * added generatedFileCleaner to handle nested hip files
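Hipification translates the CUDA sources into HIP so they build on ROCm. A sketch of how a build script can call PyTorch's hipify utility; the directory layout and argument values here are assumptions, not the repository's actual setup.py:

```python
# Sketch: invoke torch.utils.hipify from a build script. The glob pattern and
# keep_intermediates choice are assumptions for illustration.
import os
from torch.utils.hipify import hipify_python

this_dir = os.path.dirname(os.path.abspath(__file__))

# GeneratedFileCleaner tracks the generated .hip files so they can be cleaned up.
with hipify_python.GeneratedFileCleaner(keep_intermediates=True) as clean_ctx:
    hipify_python.hipify(
        project_directory=this_dir,
        output_directory=this_dir,
        includes=["csrc/*"],        # glob patterns for the CUDA sources to translate
        is_pytorch_extension=True,
        clean_ctx=clean_ctx,
    )
```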
- 23 Apr, 2020 1 commit
  ptrblck authored
    * add CUDAGenerator guard
    * fix generator_flag
    * add guards for gen pointer/ref issue
    * change mutex_ to mutex()
    * add check_generator
    Co-authored-by: pbialecki <pbialecki@nvidia.com>
- 22 Apr, 2020 1 commit
  Deyu Fu authored
- 23 Mar, 2020 1 commit
  Kexin Yu authored
- 20 Mar, 2020 2 commits
- 11 Mar, 2020 1 commit
  ptrblck authored
    * disable ninja for multihead_attn
    * fix getCurrentStream in multihead_attn
    Co-authored-by: pbialecki <pbialecki@nvidia.com>
- 02 Mar, 2020 1 commit
- 27 Feb, 2020 1 commit
  mcarilli authored
    * NHWC support for multi tensor apply
    * compilation fix for version<=1.4
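On the PyTorch side, NHWC is expressed as the channels_last memory format; the commit above is about the CUDA kernels that consume such tensors. A short illustration of producing them:

```python
# Channels-last (NHWC) tensors: same logical shape, different physical strides.
import torch

x = torch.randn(8, 64, 32, 32)                     # NCHW strides by default
x_nhwc = x.to(memory_format=torch.channels_last)   # NHWC physical layout
print(x_nhwc.shape)                                 # torch.Size([8, 64, 32, 32])
print(x_nhwc.is_contiguous(memory_format=torch.channels_last))  # True
```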
- 25 Feb, 2020 2 commits
- 24 Feb, 2020 1 commit
  Kevin Stephano authored
    * Adding C++ Multihead Attention implementation to contrib.
    * Add reference test that at least works for forward.
    * Remove CublasLt support from multihead attention.
    * Add new Python version of self attention.
    * Update python model of MHA with backward pass.
    * Fixed Output Linear connection in MHA.
    * Clean up compiles and add documentation to PySelfAttention.
    * Add Encdec Python version of multihead attention. Cleanup files.
    * Tests for self and encdec multihead attention.
    * Add reference pytorch implementation of attention with norm and add.
    * Add cutlass branch definition.
    * Add cutlass download to compile.
    * Add norm/add tests.
    * Add biases to pytorch python versions.
    * Add tests and fix issues with python version of attention masking.
    * Create README.md
    * Update README.md
    * Update README.md
    * Update perf test parameters.
    * Update README.md
    * Update README.md
    * Update README.md
    * Add files via upload
    * Update README.md
    * Update README.md
    * Update README.md
    * Fix matmul1 output tensor size. Fix tests that missed issue.
    * Allow for Z dimensions of 64K and greater on batched GEMMs.
    * remove redundant imports
    * general cleanup, remove deprecated or unused functions
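The "reference pytorch implementation" mentioned in the bullets above is the kind of baseline the C++ multihead attention is tested against. A compact, purely illustrative sketch of that scaled dot-product self-attention (dimensions are arbitrary, and this is not the contrib module's actual code):

```python
# Compact reference self-attention forward in plain PyTorch.
import math
import torch
import torch.nn.functional as F

def self_attention(q, k, v, mask=None, dropout_p=0.0):
    # q, k, v: (batch, heads, seq, head_dim); mask is additive (-inf where hidden)
    scores = q @ k.transpose(-2, -1) / math.sqrt(q.size(-1))
    if mask is not None:
        scores = scores + mask
    probs = F.dropout(F.softmax(scores, dim=-1), p=dropout_p)
    return probs @ v

q = k = v = torch.randn(2, 16, 64, 64)   # batch=2, heads=16, seq=64, head_dim=64
print(self_attention(q, k, v).shape)     # torch.Size([2, 16, 64, 64])
```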
- 15 Feb, 2020 1 commit
  Deyu Fu authored
- 06 Feb, 2020 1 commit
  Kevin Stephano authored
    * Adding C++ Multihead Attention implementation to contrib.
    * Add reference test that at least works for forward.
    * Remove CublasLt support from multihead attention.
    * Add new Python version of self attention.
    * Update python model of MHA with backward pass.
    * Fixed Output Linear connection in MHA.
    * Clean up compiles and add documentation to PySelfAttention.
    * Add Encdec Python version of multihead attention. Cleanup files.
    * Tests for self and encdec multihead attention.
    * Add reference pytorch implementation of attention with norm and add.
    * Add cutlass branch definition.
    * Add cutlass download to compile.
    * Add norm/add tests.
    * Add biases to pytorch python versions.
    * Add tests and fix issues with python version of attention masking.
    * Create README.md
    * Update README.md
    * Update README.md
    * Update perf test parameters.
    * Update README.md
    * Update README.md
    * Update README.md
    * Add f...
- 21 Jan, 2020 1 commit
  jjsjann123 authored
- 08 Jan, 2020 1 commit
  ptrblck authored
    * add WAR for pip>=19.3.1
    * remove pipmain, use extras_require instead
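The second bullet above replaces a programmatic pip invocation with setuptools' declarative extras_require mechanism. A generic sketch of that pattern; the package and extra names are placeholders, not Apex's actual setup.py:

```python
# Declaring optional dependency groups with setuptools extras_require.
from setuptools import setup, find_packages

setup(
    name="example-package",            # placeholder name
    version="0.1",
    packages=find_packages(),
    extras_require={
        "dev": ["pytest", "flake8"],   # installed only via the [dev] extra
        "docs": ["sphinx"],
    },
)
# Users opt in with:  pip install "example-package[dev]"
```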
- 04 Oct, 2019 1 commit
  Deyu Fu authored
    * move previous fused_adam and fp16_optimizer to contrib
    * make build contrib.fused_adam optional
    * change build option name
    * remove unnecessary try import
- 13 Sep, 2019 1 commit
  mcarilli authored
- 06 Sep, 2019 1 commit
  mcarilli authored
    * Pushing for build tests
    * Contrib files
    * Removing deprecated checks
- 17 Aug, 2019 1 commit
  Deyu Fu authored
- 16 Aug, 2019 1 commit
  Deyu Fu authored
- 13 Aug, 2019 1 commit
  Marek Kolodziej authored
    Co-authored-by: Aditya Agrawal <aditya.iitb@gmail.com>
    Co-authored-by: Marek Kolodziej <mkolod@gmail.com>
- 08 Aug, 2019 1 commit
  Deyu Fu authored
- 31 May, 2019 1 commit
  Thor Johnsen authored
    * First draft, for discussion
    * Fix mistakes in LAMB equations
    * Add loop over chunk
    * Bug fix
    * Bug fix
    * Bug fix
    * Undo bug fix
    * Bug fix
    * Add multi tensor LAMB optimizer to setup.py
    * Rename step_size to learning_rate
    * Fix compilation errors
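For reference on the "LAMB equations" the commit mentions: LAMB combines Adam-style bias-corrected moments with a per-layer trust ratio that rescales the learning rate. A single-tensor sketch of one step, following the published algorithm; this is purely illustrative, not the fused multi-tensor kernel added here:

```python
# Single-tensor sketch of one LAMB step: Adam moments plus a per-layer trust ratio.
import torch

def lamb_step(p, grad, m, v, step, lr=1e-3, beta1=0.9, beta2=0.999,
              eps=1e-6, weight_decay=0.01):
    m.mul_(beta1).add_(grad, alpha=1 - beta1)             # first moment
    v.mul_(beta2).addcmul_(grad, grad, value=1 - beta2)   # second moment
    m_hat = m / (1 - beta1 ** step)                       # bias corrections
    v_hat = v / (1 - beta2 ** step)
    update = m_hat / (v_hat.sqrt() + eps) + weight_decay * p
    w_norm, u_norm = p.norm(), update.norm()
    trust_ratio = w_norm / u_norm if w_norm > 0 and u_norm > 0 else 1.0
    p.add_(update, alpha=-lr * float(trust_ratio))        # lr scaled by trust ratio

p = torch.randn(10)
m, v = torch.zeros_like(p), torch.zeros_like(p)
lamb_step(p, torch.randn(10), m, v, step=1)
```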
- 23 May, 2019 1 commit
  Michael Carilli authored