  1. 20 Dec, 2022 1 commit
  2. 10 Dec, 2022 1 commit
  3. 09 Dec, 2022 2 commits
  4. 06 Dec, 2022 2 commits
  5. 21 Sep, 2022 1 commit
  6. 19 Sep, 2022 1 commit
    • Faster build (#95) · 89f5722c
      Hubert Lu authored
      * Remove redundant imports and enable ninja for MHA extension
      
      * Remove redundant CUDAExtension imports
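      For context, switching a CUDAExtension build over to ninja is normally done
      through PyTorch's torch.utils.cpp_extension machinery. A minimal setup.py
      sketch follows; the extension name "fused_mha" and the source file list are
      hypothetical placeholders, not taken from the commit.

          # Sketch of a setup.py that builds a CUDA extension with ninja enabled.
          # Extension name and source files are illustrative placeholders.
          from setuptools import setup
          from torch.utils.cpp_extension import BuildExtension, CUDAExtension

          setup(
              name="fused_mha",
              ext_modules=[
                  CUDAExtension(
                      name="fused_mha",
                      sources=["csrc/fused_mha.cpp", "csrc/fused_mha_kernel.cu"],
                  )
              ],
              # use_ninja=True asks BuildExtension to drive compilation with ninja,
              # which parallelizes and caches per-file builds for faster rebuilds.
              cmdclass={"build_ext": BuildExtension.with_options(use_ninja=True)},
          )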
  7. 08 Sep, 2022 4 commits
  8. 07 Sep, 2022 2 commits
  9. 26 Aug, 2022 1 commit
  10. 23 Aug, 2022 2 commits
  11. 22 Aug, 2022 2 commits
  12. 15 Aug, 2022 1 commit
  13. 10 Aug, 2022 1 commit
  14. 09 Aug, 2022 7 commits
  15. 08 Aug, 2022 6 commits
  16. 05 Aug, 2022 1 commit
    • Enable FusedRMSNorm (#78) · c97ebfab
      Hubert Lu authored
      
      
      * FusedRMSNorm/"T5LayerNorm" based on FusedLayerNorm (#1274)
      
      * FusedRMSNorm based on FusedLayerNorm
      
      * refactor duplicated kernels
      
      * delete comments
      
      * delete comments
      
      * cleanup
      
      * cleanup
      
      * cleanup, fixed clobbering forward_affine_mixed_dtypes
      
      * fix pybind naming and add MixedFused test
      
      * undo skipping
      
      * check elementwise_affine
      
      * Update tests/L0/run_fused_layer_norm/test_fused_layer_norm.py
      
      Oof, nice catch, thanks
      Co-authored-by: Masaki Kozuki <masaki.kozuki.2014@gmail.com>
      
      * fix and generate docs for FusedRMSNorm (#1285)
      
      * [FusedRMSNorm doc] document where epsilon is added (#1295)
      
      * [FusedRMSNorm doc] add epsilon to formula
      
      * correct
      
      * better wording
      
      * Fix some bugs
      
      * Optimize HostRMSNormGradient and HostApplyRMSNorm for AMD GPUs
      
      * Fix NaN issues in FusedRMSNorm
      
      * Update test_fused_layer_norm.py
      
      * Skip test_fused_layer_norm.TestAutocastFusedRMSNorm on ROCm
      
      * Use at::cuda::warp_size() instead of at::cuda::getCurrentDeviceProperties()->warpSize
      Co-authored-by: eqy <eddiey@nvidia.com>
      Co-authored-by: Masaki Kozuki <masaki.kozuki.2014@gmail.com>
      Co-authored-by: Stas Bekman <stas00@users.noreply.github.com>
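      The epsilon-related bullets above concern where eps enters the RMSNorm
      ("T5LayerNorm") formula: it is added inside the root-mean-square term,
      y = x / sqrt(mean(x^2) + eps) * weight. A minimal, unfused Python reference
      of that formula is sketched below; it is illustrative only and is not the
      fused CUDA/ROCm kernel touched by this commit.

          # Unfused RMSNorm ("T5LayerNorm") reference, matching the documented formula
          #   y = x / sqrt(mean(x**2) + eps) * weight
          # Illustrative sketch only; the commit implements a fused kernel.
          import torch

          def rms_norm(x: torch.Tensor, weight: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
              # Mean of squares over the normalized (last) dimension; unlike LayerNorm,
              # no mean is subtracted, which is what makes this an RMS norm.
              mean_sq = x.pow(2).mean(dim=-1, keepdim=True)
              x = x * torch.rsqrt(mean_sq + eps)  # eps sits inside the rsqrt
              return weight * x  # elementwise_affine scale (gamma); no bias term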
  17. 29 Jul, 2022 5 commits