1. 21 Sep, 2022 1 commit
  2. 19 Sep, 2022 1 commit
    • Faster build (#95) · 89f5722c
      Hubert Lu authored
      * Remove redundant imports and enable ninja for the MHA extension

      * Remove redundant CUDAExtension imports
      89f5722c
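The build speedup in this entry comes largely from letting ninja drive the extension compile. A minimal sketch of that wiring with torch.utils.cpp_extension, assuming a hypothetical extension name and source list rather than the actual apex setup.py:

```python
# Minimal sketch (not the actual apex setup.py) of a ninja-driven extension
# build; the extension name and source files below are illustrative.
from setuptools import setup
from torch.utils.cpp_extension import BuildExtension, CUDAExtension

setup(
    name="fast_multihead_attn_demo",  # hypothetical package name
    ext_modules=[
        CUDAExtension(
            name="fast_multihead_attn_demo",
            sources=["mha_frontend.cpp", "mha_kernels.cu"],  # illustrative sources
        )
    ],
    # use_ninja=True asks PyTorch's BuildExtension to compile with ninja, which
    # parallelizes and caches object builds for noticeably faster rebuilds.
    cmdclass={"build_ext": BuildExtension.with_options(use_ninja=True)},
)
```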
  3. 08 Sep, 2022 1 commit
    • Enable --transducer extension for ROCm (#88) · ae5ca671
      Hubert Lu authored
      * Enable --transducer extension for ROCm
      
      * Enable --transducer unit tests for ROCm
      
      * Skip some failing tests in test_transducer_joint.py
      
      * Skip test_transducer_joint_pack for transducer extension
      
      * Keep transducer extension CUDA-compatible
      ae5ca671
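The per-test skips mentioned in this entry typically reduce to a runtime check on the backend. A minimal sketch, assuming the usual `torch.version.hip` probe; the test body is a placeholder, not the real unit test:

```python
# Sketch of skipping a known-failing transducer test on ROCm while keeping it
# active on CUDA; only the skip mechanism is the point, the test body is a stub.
import unittest

import torch

IS_ROCM = torch.version.hip is not None  # None on CUDA builds, a version string on ROCm


class TestTransducerJoint(unittest.TestCase):
    @unittest.skipIf(IS_ROCM, "known failure on ROCm; see skips in test_transducer_joint.py")
    def test_transducer_joint_pack(self):
        ...  # the real test exercises the packed transducer joint kernel
```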
  4. 07 Sep, 2022 1 commit
  5. 23 Aug, 2022 2 commits
  6. 22 Aug, 2022 1 commit
  7. 07 Jul, 2022 1 commit
  8. 31 May, 2022 1 commit
  9. 21 Apr, 2022 1 commit
  10. 19 Apr, 2022 1 commit
  11. 15 Apr, 2022 1 commit
    • Apex transformer (#77) · 27a47345
      Hubert Lu authored
      * Add setup_simple.py for debugging the compilation issue of scaled_masked_softmax_cuda
      
      * Comment out CUDA-specific implementations
      
      * Resolve the filename collision between *.cpp files containing to-hipify code and *.cu files
      27a47345
  12. 13 Apr, 2022 1 commit
    • Cherry-picked the commit from upstream for faster --fast_multihead_attn build (#76) · 29b36315
      Hubert Lu authored
      
      
      * Faster `--fast_multihead_attn` build (#1245)
      
      * merge .so files
      
      * odr
      
      * fix build
      
      * update import
      
      * apply psf/black with max line length of 120
      
      * update
      
      * fix
      
      * update
      
      * build fixed again but undefined symbol again
      
      * fix 2, still layer norm grad is undefined
      
      * remove unused cpp files
      
      * without layer_norm.cuh, import works
      
      * import fast_multihead_attn works...
      
      but why? Was the unnecessary `#include "layer_norm.cuh"` the culprit,
      preventing the shared objects from linking `HostApplyLayerNorm` and
      `HostLayerNormGradient`?
      
      * clean up layer norm
      
      * Fix some bugs
      Co-authored-by: Masaki Kozuki <mkozuki@nvidia.com>
      29b36315
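The "merge .so files" bullets above amount to compiling what used to be several per-kernel extensions into a single shared object, so the Python side imports one module. A rough sketch under that assumption; the source file names are illustrative, not the exact apex layout:

```python
# Sketch of merging several per-kernel extensions into one shared object; each
# .cpp keeps its own pybind11 registrations, but everything links into a single
# module, avoiding duplicate or undefined symbols across separate .so files.
from setuptools import setup
from torch.utils.cpp_extension import BuildExtension, CUDAExtension

setup(
    name="fast_multihead_attn",
    ext_modules=[
        CUDAExtension(
            name="fast_multihead_attn",
            sources=[  # illustrative file names
                "self_multihead_attn_frontend.cpp",
                "encdec_multihead_attn_frontend.cpp",
                "self_multihead_attn_cuda.cu",
                "encdec_multihead_attn_cuda.cu",
            ],
        )
    ],
    cmdclass={"build_ext": BuildExtension},
)
```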
  13. 06 Apr, 2022 1 commit
    • Make rocblas_gemm_flags_fp16_alt_impl in MHA and MLP backward compatible with old PyTorch versions (#74) · 5ecad142
      Hubert Lu authored
      * First attempt to make rocblas flag backward compatible
      
      * Fix some bugs
      
      * Fix some bugs
      
      * Make rocblas_gemm_flags_fp16_alt_impl in MHA backward compatible with old PyTorch versions
      
      * Add groupbn extension unit tests for ROCm
      
      * Fix some bugs
      5ecad142
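Backward compatibility with older PyTorch here usually means gating the rocBLAS flag on the installed version. A hedged sketch of that pattern; the version cutoff and the define name are assumptions, not the values used in the commit:

```python
# Sketch of gating a compile definition on the installed PyTorch version so the
# extension still builds where rocblas_gemm_flags_fp16_alt_impl does not exist.
# The 1.11 cutoff and the macro name are illustrative assumptions.
import torch

TORCH_MAJOR = int(torch.__version__.split(".")[0])
TORCH_MINOR = int(torch.__version__.split(".")[1])

extra_cuda_flags = []
if (TORCH_MAJOR, TORCH_MINOR) >= (1, 11):
    # newer PyTorch/rocBLAS exposes the fp16 alternate-implementation flag;
    # older versions silently fall back to the default GEMM path
    extra_cuda_flags.append("-DROCBLAS_FP16_ALT_IMPL_AVAILABLE")  # hypothetical define
```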
  14. 05 Apr, 2022 2 commits
  15. 30 Mar, 2022 1 commit
  16. 25 Mar, 2022 1 commit
  17. 24 Mar, 2022 1 commit
    • Add CUDA Focal Loss Implementation (#1337) · 28f8539c
      Masaki Kozuki authored
      
      
      Take-over of #1097
      
      * Add fast CUDA focal loss implementation.
      
      * Enable fast math for CUDA focal loss.
      
      * Correct typo.
      
      * replace deprecated macros
      
      * TORCH_CUDA_CHECK -> AT_CUDA_CHECK
      
      The former is defined in torch/csrc/profiler/cuda.cpp, so it's usually not available.
      The latter, however, is defined in ATen/cuda/Exceptions.h as an alias of C10_CUDA_CHECK.
      
      * add test
      
      * clean up
      
      * guard for torchvision
      Co-authored-by: Wil Kong <alpha0422@gmail.com>
      28f8539c
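The final "guard for torchvision" bullet presumably means the new test only runs when torchvision is importable, since the reference focal loss comes from there. A minimal sketch under that assumption; the test name and body are placeholders:

```python
# Sketch of guarding a focal-loss test on torchvision availability; only the
# guard itself is the point here, the test body is a stub.
import unittest

try:
    import torchvision  # noqa: F401
    HAS_TORCHVISION = True
except ImportError:
    HAS_TORCHVISION = False


@unittest.skipIf(not HAS_TORCHVISION, "torchvision is needed for the reference focal loss")
class TestFocalLoss(unittest.TestCase):
    def test_matches_torchvision_reference(self):
        ...  # the real test compares the CUDA kernel to torchvision.ops.sigmoid_focal_loss
```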
  18. 23 Mar, 2022 1 commit
  19. 11 Mar, 2022 1 commit
  20. 27 Feb, 2022 1 commit
  21. 26 Feb, 2022 1 commit
  22. 10 Feb, 2022 1 commit
  23. 01 Feb, 2022 1 commit
    • Add permutation-related support as an extension for the ASP lib (#1194) · 89edb819
      ChongyuNVIDIA authored
      * Add permutation-related support as an extension for the ASP lib.
      
      * [Fix] Track the permutation sequence for progressive channel swap strategy.
      
      * Fix the corner case where one layer is not sparse but still needs the permutation applied because of its siblings.
      
      * Fix the deprecated functions in ASP unit tests.
      
      * Fix the sparsity info typo in ASP lib.
      
      * [Enhancement] Set an identical random seed on all GPUs to make sure the permutation search generates the same results.
      
      * Update the README.md with identical random seed setting and NeurIPS info.
      
      * Integrate the Pybind11 enhancement of permutation search into ASP lib.
      89edb819
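Setting an identical random seed on all GPUs for the permutation search boils down to seeding every generator with the same value on each rank. A minimal sketch; the helper name and default seed are illustrative:

```python
# Sketch of seeding all RNGs identically so every GPU/process explores the same
# permutation candidates; the helper name and default seed are illustrative.
import random

import numpy as np
import torch


def set_identical_seed(seed: int = 1234) -> None:
    random.seed(seed)                 # Python RNG
    np.random.seed(seed)              # NumPy RNG
    torch.manual_seed(seed)           # CPU RNG
    torch.cuda.manual_seed_all(seed)  # RNGs of all visible GPUs in this process
```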
  24. 28 Jan, 2022 1 commit
  25. 19 Jan, 2022 1 commit
  26. 13 Jan, 2022 1 commit
  27. 16 Dec, 2021 1 commit
  28. 15 Dec, 2021 1 commit
  29. 14 Dec, 2021 1 commit
    • Faster `--fast_multihead_attn` build (#1245) · 7ec8ed67
      Masaki Kozuki authored
      * merge .so files
      
      * odr
      
      * fix build
      
      * update import
      
      * apply psf/black with max line length of 120
      
      * update
      
      * fix
      
      * update
      
      * build fixed again but undefined symbol again
      
      * fix 2, still layer norm grad is undefined
      
      * remove unused cpp files
      
      * without layer_norm.cuh, import works
      
      * import fast_multihead_attn works...
      
      but why? Was the unnecessary `#include "layer_norm.cuh"` the culprit,
      preventing the shared objects from linking `HostApplyLayerNorm` and
      `HostLayerNormGradient`?
      
      * clean up layer norm
      7ec8ed67
  30. 09 Dec, 2021 2 commits
  31. 03 Dec, 2021 1 commit
  32. 02 Dec, 2021 1 commit
  33. 02 Nov, 2021 1 commit
  34. 27 Oct, 2021 1 commit
  35. 21 Oct, 2021 1 commit
  36. 19 Oct, 2021 2 commits