1. 08 Sep, 2022 1 commit
    • Enable --transducer extension for ROCm (#88) · ae5ca671
      Hubert Lu authored
      * Enable --transducer extension for ROCm (a sketch of the setup.py gating follows this entry)
      
      * Enable --transducer unit tests for ROCm
      
      * Skip some failing tests in test_transducer_joint.py
      
      * Skip test_transducer_joint_pack for transducer extension
      
      * Keep transducer extension CUDA-compatible
      ae5ca671
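The gating described in this commit follows apex's pattern of optional extension flags in setup.py. A minimal sketch of the idea, assuming apex's usual source layout (the module name and file paths are illustrative, not the actual diff):

```python
# Sketch (assumed names/paths): gate an optional extension on a
# command-line flag, in the style of apex's setup.py.
import sys

from torch.utils.cpp_extension import CUDAExtension

ext_modules = []
if "--transducer" in sys.argv:
    sys.argv.remove("--transducer")
    # The same CUDAExtension serves both backends: under ROCm the .cu
    # sources are hipified at build time, so CUDA builds stay intact.
    ext_modules.append(
        CUDAExtension(
            name="transducer_joint_cuda",
            sources=[
                "apex/contrib/csrc/transducer/transducer_joint.cpp",
                "apex/contrib/csrc/transducer/transducer_joint_kernel.cu",
            ],
        )
    )
```

The skipped tests mentioned above would typically carry a guard such as `unittest.skipIf(torch.version.hip is not None, ...)` so they remain active on CUDA.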
  2. 07 Sep, 2022 1 commit
  3. 23 Aug, 2022 2 commits
  4. 22 Aug, 2022 1 commit
  5. 07 Jul, 2022 1 commit
  6. 31 May, 2022 1 commit
  7. 21 Apr, 2022 1 commit
  8. 19 Apr, 2022 1 commit
  9. 15 Apr, 2022 1 commit
    • Apex transformer (#77) · 27a47345
      Hubert Lu authored
      * Add setup_simple.py for debugging the compilation issue of scaled_masked_softmax_cuda (a sketch of such a script follows this entry)
      
      * Comment out CUDA-specific implementations
      
      * Resolve the filename collision between *.cpp files containing to-be-hipified code and *.cu files
      27a47345
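A standalone script for isolating one extension's compile failure, as the first bullet describes, can be quite small. A minimal sketch, assuming apex's Megatron source paths (the paths are assumptions):

```python
# Sketch (assumed paths): a minimal setup script that builds only the
# problematic extension, to isolate its compile errors.
from setuptools import setup
from torch.utils.cpp_extension import BuildExtension, CUDAExtension

setup(
    name="scaled_masked_softmax_debug",
    ext_modules=[
        CUDAExtension(
            name="scaled_masked_softmax_cuda",
            sources=[
                "csrc/megatron/scaled_masked_softmax.cpp",
                "csrc/megatron/scaled_masked_softmax_cuda.cu",
            ],
        )
    ],
    cmdclass={"build_ext": BuildExtension},
)
```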
  10. 13 Apr, 2022 1 commit
    • Cherry-picked the commit from upstream for faster --fast_multihead_attn build (#76) · 29b36315
      Hubert Lu authored
      
      
      * Faster `--fast_multihead_attn` build (#1245)
      
      * merge .so files (see the sketch after this entry)
      
      * odr
      
      * fix build
      
      * update import
      
      * apply psf/black with max line length of 120
      
      * update
      
      * fix
      
      * update
      
      * build fixed again but undefined symbol again
      
      * fix 2, still layer norm grad is undefined
      
      * remove unused cpp files
      
      * without layer_norm.cuh, import works
      
      * import fast_multihead_attn works...
      
      but why? Was the unnecessary `#include "layer_norm.cuh"` the culprit that
      kept the shared objects from linking `HostApplyLayerNorm` and
      `HostLayerNormGradient`?
      
      * clean up layer norm
      
      * Fix some bugs
      Co-authored-by: Masaki Kozuki <mkozuki@nvidia.com>
      29b36315
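The "merge .so files" step is what speeds up the build: folding what used to be several extensions into one compiles the sources in parallel and links a single shared object. A minimal sketch of the idea (the source list is illustrative, not the actual file set):

```python
# Sketch (illustrative sources): one CUDAExtension in place of several,
# so only a single .so is built and linked.
from torch.utils.cpp_extension import CUDAExtension

ext = CUDAExtension(
    name="fast_multihead_attn",
    sources=[
        # Previously, groups of these sources each built their own .so.
        "apex/contrib/csrc/multihead_attn/multihead_attn_frontend.cpp",
        "apex/contrib/csrc/multihead_attn/self_multihead_attn_cuda.cu",
        "apex/contrib/csrc/multihead_attn/encdec_multihead_attn_cuda.cu",
    ],
    extra_compile_args={"cxx": ["-O3"], "nvcc": ["-O3"]},
)
```

The "odr" bullet refers to the One Definition Rule: once everything links into one shared object, duplicated definitions (such as the layer-norm helpers dragged in by the stray `#include "layer_norm.cuh"`) must go, which is exactly what the later clean-up commits chase down.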
  11. 06 Apr, 2022 1 commit
    • Make rocblas_gemm_flags_fp16_alt_impl in MHA and MLP backward compatible with old PyTorch versions (#74) · 5ecad142
      Hubert Lu authored
      Make rocblas_gemm_flags_fp16_alt_impl in MHA and MLP backward compatible with old PyTorch versions (#74)
      
      * First attempt to make the rocblas flag backward compatible (see the version-guard sketch after this entry)
      
      * Fix some bugs
      
      * Fix some bugs
      
      * Make rocblas_gemm_flags_fp16_alt_impl in MHA backward compatible with old PyTorch versions
      
      * Add groupbn extension unit tests for ROCm
      
      * Fix some bugs
      5ecad142
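Backward compatibility here means the extension must still build and run when the installed PyTorch predates the rocBLAS alt-impl flag. One way to version-guard it at build time, as a sketch (the cutoff version and macro name are assumptions, not the actual patch):

```python
# Sketch (assumed cutoff and macro name): define a feature macro only
# when the installed PyTorch is new enough to support the flag.
from packaging.version import Version
import torch

extra_cuda_flags = ["-O3"]
torch_version = Version(torch.__version__.split("+")[0])
if torch_version >= Version("1.10"):
    extra_cuda_flags.append("-DROCBLAS_FP16_ALT_IMPL_AVAILABLE")

# The C++ side can then compile the flag conditionally:
#   #ifdef ROCBLAS_FP16_ALT_IMPL_AVAILABLE
#       flags = rocblas_gemm_flags_fp16_alt_impl;
#   #endif
```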
  12. 05 Apr, 2022 2 commits
  13. 30 Mar, 2022 1 commit
  14. 25 Mar, 2022 1 commit
  15. 24 Mar, 2022 1 commit
    • Add CUDA Focal Loss Implementation (#1337) · 28f8539c
      Masaki Kozuki authored
      
      
      Take-over of #1097
      
      * Add fast CUDA focal loss implementation.
      
      * Enable fast math for CUDA focal loss.
      
      * Correct typo.
      
      * replace deprecated macros
      
      * TORCH_CUDA_CHECK -> AT_CUDA_CHECK
      
      The former is defined in torch/csrc/profiler/cuda.cpp, so it is not usually available.
      The latter, however, is defined in ATen/cuda/Exceptions.h as an alias of C10_CUDA_CHECK.
      
      * add test
      
      * clean up
      
      * guard for torchvision (see the skip sketch after this entry)
      Co-authored-by: Wil Kong <alpha0422@gmail.com>
      28f8539c
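The "guard for torchvision" bullet usually amounts to a conditional skip, since the test compares the fused kernel against a torchvision reference. A minimal sketch (the test class and reference op are assumptions based on the commit message):

```python
# Sketch (assumed test body): skip the focal-loss test when torchvision,
# used as the reference implementation, is not installed.
import unittest

try:
    import torchvision  # noqa: F401
    HAS_TORCHVISION = True
except ImportError:
    HAS_TORCHVISION = False


@unittest.skipUnless(HAS_TORCHVISION, "needs torchvision as the reference")
class TestFocalLoss(unittest.TestCase):
    def test_matches_reference(self):
        # Compare the fused CUDA focal loss against
        # torchvision.ops.sigmoid_focal_loss here.
        ...
```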
  16. 23 Mar, 2022 1 commit
  17. 11 Mar, 2022 1 commit
  18. 27 Feb, 2022 1 commit
  19. 26 Feb, 2022 1 commit
  20. 10 Feb, 2022 1 commit
  21. 01 Feb, 2022 1 commit
    • Add the permutation-related support as an extension for the ASP lib. (#1194) · 89edb819
      ChongyuNVIDIA authored
      * Add the permutation-related support as an extension for the ASP lib.
      
      * [Fix] Track the permutation sequence for progressive channel swap strategy.
      
      * Fix the corner case where one layer is not sparse but needs the permutation applied because of its siblings.
      
      * Fix the deprecated functions in ASP unit tests.
      
      * Fix the sparsity info typo in ASP lib.
      
      * [Enhancement] Set an identical random seed on all GPUs to make sure the permutation search generates the same results everywhere (see the sketch after this entry).
      
      * Update the README.md with identical random seed setting and NeurIPS info.
      
      * Integrate the Pybind11 enhancement of permutation search into ASP lib.
      89edb819
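Seeding every GPU identically, as the enhancement bullet describes, is only a few lines; a minimal sketch (the default seed value and the set of RNGs covered are assumptions):

```python
# Sketch (assumed seed value): seed all RNGs the same way on every rank
# so the permutation search is deterministic across GPUs.
import random

import numpy as np
import torch


def set_identical_seed(seed: int = 0) -> None:
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)           # seeds CPU and the current GPU
    torch.cuda.manual_seed_all(seed)  # seeds every visible GPU
```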
  22. 28 Jan, 2022 1 commit
  23. 19 Jan, 2022 1 commit
  24. 13 Jan, 2022 1 commit
  25. 16 Dec, 2021 1 commit
  26. 15 Dec, 2021 1 commit
  27. 14 Dec, 2021 1 commit
    • Faster `--fast_multihead_attn` build (#1245) · 7ec8ed67
      Masaki Kozuki authored
      * merge .so files
      
      * odr
      
      * fix build
      
      * update import
      
      * apply psf/black with max line length of 120
      
      * update
      
      * fix
      
      * update
      
      * build fixed again but undefined symbol again
      
      * fix 2, still layer norm grad is undefined
      
      * remove unused cpp files
      
      * without layer_norm.cuh, import works
      
      * import fast_multihead_attn works...
      
      but why? Was the unnecessary `#include "layer_norm.cuh"` the culprit that
      kept the shared objects from linking `HostApplyLayerNorm` and
      `HostLayerNormGradient`?
      
      * clean up layer norm
      7ec8ed67
  28. 09 Dec, 2021 2 commits
  29. 03 Dec, 2021 1 commit
  30. 02 Dec, 2021 1 commit
  31. 02 Nov, 2021 1 commit
  32. 27 Oct, 2021 1 commit
  33. 21 Oct, 2021 1 commit
  34. 19 Oct, 2021 2 commits
  35. 02 Oct, 2021 1 commit
  36. 08 Sep, 2021 1 commit
    • enable ninja (#1164) · 9ce0a10f
      Masaki Kozuki authored
      - passing include directories to `CUDAExtension`'s `include_dirs` argument
      - removing `-I/path/to/dir` arguments from `extra_compile_args` (see the sketch below)
      9ce0a10f
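Moving the include paths out of raw compiler flags and into the structured argument is what lets ninja (via `CUDAExtension`) see them properly. A minimal before/after sketch (the extension name and paths are illustrative):

```python
# Sketch (illustrative name/paths): pass include paths via include_dirs
# instead of -I flags buried in extra_compile_args.
import os

from torch.utils.cpp_extension import CUDAExtension

csrc = os.path.join("apex", "contrib", "csrc", "multihead_attn")

# Before: include paths hidden inside compiler flags.
#   extra_compile_args={"nvcc": ["-I" + csrc, "-O3"]}

# After: include paths passed structurally.
ext = CUDAExtension(
    name="fast_multihead_attn",
    sources=[os.path.join(csrc, "multihead_attn_frontend.cpp")],
    include_dirs=[csrc],
    extra_compile_args={"cxx": ["-O3"], "nvcc": ["-O3"]},
)
```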