25 Jun, 2021 · 1 commit
23 Feb, 2021 · 1 commit
21 Jan, 2021 · 1 commit
18 Jan, 2021 · 1 commit
16 Dec, 2020 · 1 commit
15 Dec, 2020 · 3 commits
10 Dec, 2020 · 1 commit
09 Dec, 2020 · 2 commits
01 Dec, 2020 · 1 commit
18 Aug, 2020 · 1 commit
17 Aug, 2020 · 1 commit
10 Aug, 2020 · 1 commit
05 Aug, 2020 · 1 commit
01 Aug, 2020 · 1 commit
10 Jul, 2020 · 1 commit
01 Jun, 2020 · 1 commit
30 May, 2020 · 2 commits
29 May, 2020 · 1 commit
20 May, 2020 · 1 commit
18 May, 2020 · 1 commit
14 May, 2020 · 1 commit
07 May, 2020 · 1 commit
28 Apr, 2020 · 1 commit
23 Apr, 2020 · 1 commit
22 Apr, 2020 · 1 commit
23 Mar, 2020 · 1 commit
20 Mar, 2020 · 2 commits
11 Mar, 2020 · 1 commit
02 Mar, 2020 · 1 commit
27 Feb, 2020 · 1 commit
25 Feb, 2020 · 2 commits
24 Feb, 2020 · 1 commit
      Change to Multihead Attention to allow Batched GEMMs larger than 64K. (#728) · 1733946a
      Kevin Stephano authored
* Adding C++ Multihead Attention implementation to contrib.
* Add reference test that at least works for forward.
* Remove CublasLt support from multihead attention.
* Add new Python version of self attention.
* Update python model of MHA with backward pass.
* Fixed Output Linear connection in MHA.
* Clean up compiles and add documentation to PySelfAttention.
* Add Encdec Python version of multihead attention. Cleanup files.
* Tests for self and encdec multihead attention.
* Add reference pytorch implementation of attention with norm and add.
* Add cutlass branch definition.
* Add cutlass download to compile.
* Add norm/add tests.
* Add biases to pytorch python versions.
* Add tests and fix issues with python version of attention masking.
* Create README.md
* Update README.md
* Update README.md
* Update perf test parameters.
* Update README.md
* Update README.md
* Update README.md
* Add files via upload
* Update README.md
* Update README.md
* Update README.md
* Fix matmul1 output tensor size. Fix tests that missed issue.
* Allow for Z dimensions of 64K and greater on batched GEMMs (see the sketch after this entry).
* remove redundant imports
* general cleanup, remove deprecated or unused functions
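The headline change here works around a CUDA launch limit: gridDim.z tops out at 65535, so a batched GEMM that maps one problem per blockIdx.z cannot launch 64K or more batches in a single call. Below is a minimal sketch of the chunking idea in plain PyTorch; the helper name chunked_bmm and the pure-Python framing are illustrative assumptions, not the actual contrib C++/CUDA patch:

    import torch

    CUDA_MAX_GRID_Z = 65535  # upper bound on gridDim.z for a CUDA kernel launch

    def chunked_bmm(a, b, chunk=CUDA_MAX_GRID_Z):
        """Batched matmul that never launches more than `chunk` problems at once.

        a: (batch, m, k), b: (batch, k, n). Slicing along the batch dimension
        keeps each underlying kernel launch within the grid-z ceiling.
        """
        if a.size(0) <= chunk:
            return torch.bmm(a, b)
        out = torch.empty(a.size(0), a.size(1), b.size(2),
                          device=a.device, dtype=a.dtype)
        for start in range(0, a.size(0), chunk):
            end = start + chunk
            torch.bmm(a[start:end], b[start:end], out=out[start:end])
        return out

Writing each chunk into a view of one preallocated output avoids a final concatenation pass, so the workaround costs nothing beyond the extra kernel launches.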
15 Feb, 2020 · 1 commit
06 Feb, 2020 · 1 commit
      Add Fast Multihead Attention to APEX Contrib (#697) · 3f94528e
      Kevin Stephano authored
* Adding C++ Multihead Attention implementation to contrib.
* Add reference test that at least works for forward.
* Remove CublasLt support from multihead attention.
* Add new Python version of self attention (see the sketch after this entry).
* Update python model of MHA with backward pass.
* Fixed Output Linear connection in MHA.
* Clean up compiles and add documentation to PySelfAttention.
* Add Encdec Python version of multihead attention. Cleanup files.
* Tests for self and encdec multihead attention.
* Add reference pytorch implementation of attention with norm and add.
* Add cutlass branch definition.
* Add cutlass download to compile.
* Add norm/add tests.
* Add biases to pytorch python versions.
* Add tests and fix issues with python version of attention masking.
* Create README.md
* Update README.md
* Update README.md
* Update perf test parameters.
* Update README.md
* Update README.md
* Update README.md
* Add files via upload ...
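Several of these messages describe a pure-Python reference for the fused kernels: a Python version of self attention (PySelfAttention) with a backward pass, padding-mask handling, and projection biases. A minimal sketch of what such a reference can look like, assuming a fairseq-style (seq, batch, embed) layout, a fused QKV projection, and a boolean key-padding mask; the function name and argument conventions are illustrative, not the contrib module's actual API:

    import math
    import torch
    import torch.nn.functional as F

    def ref_self_attn(x, qkv_weight, qkv_bias, out_weight, out_bias,
                      heads, pad_mask=None):
        # x: (seq, batch, embed); qkv_weight: (3*embed, embed);
        # pad_mask: (batch, seq) bool, True where a key position is padding.
        seq, bsz, embed = x.shape
        head_dim = embed // heads
        scale = 1.0 / math.sqrt(head_dim)

        # Fused input projection, then split into Q, K, V.
        q, k, v = F.linear(x, qkv_weight, qkv_bias).chunk(3, dim=-1)

        def split_heads(t):  # (seq, batch, embed) -> (batch*heads, seq, head_dim)
            return t.contiguous().view(seq, bsz * heads, head_dim).transpose(0, 1)

        q, k, v = map(split_heads, (q, k, v))
        scores = torch.bmm(q * scale, k.transpose(1, 2))  # (batch*heads, seq, seq)
        if pad_mask is not None:
            # Mask out padded key positions before the softmax.
            scores = scores.view(bsz, heads, seq, seq)
            scores = scores.masked_fill(pad_mask[:, None, None, :], float('-inf'))
            scores = scores.view(bsz * heads, seq, seq)
        probs = F.softmax(scores, dim=-1)
        ctx = torch.bmm(probs, v)                         # (batch*heads, seq, head_dim)
        ctx = ctx.transpose(0, 1).contiguous().view(seq, bsz, embed)
        return F.linear(ctx, out_weight, out_bias)        # output linear projection

Because everything above is ordinary autograd-visible PyTorch, the backward pass mentioned in the messages comes for free, which is what makes a model like this useful as a numerical reference for the fused C++/CUDA path.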