  1. 05 Aug, 2022 1 commit
    • Enable FusedRMSNorm (#78) · c97ebfab
      Hubert Lu authored
      
      
      * FusedRMSNorm/"T5LayerNorm" based on FusedLayerNorm (#1274)
      
      * FusedRMSNorm based on FusedLayerNorm
      
      * refactor duplicated kernels
      
      * delete comments
      
      * delete comments
      
      * cleanup
      
      * cleanup
      
      * cleanup, fixed clobbering forward_affine_mixed_dtypes
      
      * fix pybind naming and add MixedFused test
      
      * undo skipping
      
      * check elementwise_affine
      
      * Update tests/L0/run_fused_layer_norm/test_fused_layer_norm.py
      
      Oof, nice catch, thanks
      Co-authored-by: Masaki Kozuki <masaki.kozuki.2014@gmail.com>
      
      * fix and generate docs for FusedRMSNorm (#1285)
      
      * [FusedRMSNorm doc] document where epsilon is added (#1295)
      
      * [FusedRMSNorm doc] add epsilon to formula
      
      * correct
      
      * better wording
      
      * Fix some bugs
      
      * Optimize HostRMSNormGradient and HostApplyRMSNorm for AMD GPUs
      
      * Fix NaN issues in FusedRMSNorm
      
      * Update test_fused_layer_norm.py
      
      * Skip test_fused_layer_norm.TestAutocastFusedRMSNorm on ROCm
      
      * Use at::cuda::warp_size() instead of at::cuda::getCurrentDeviceProperties()->warpSize
      Co-authored-by: eqy <eddiey@nvidia.com>
      Co-authored-by: Masaki Kozuki <masaki.kozuki.2014@gmail.com>
      Co-authored-by: Stas Bekman <stas00@users.noreply.github.com>
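      For reference, a minimal unfused sketch of what FusedRMSNorm computes
      (the T5-style formulation the PR names), showing where epsilon is added
      per the doc fixes above: inside the square root. This is an
      illustration under those assumptions, not the apex API;
      rms_norm_reference is a hypothetical name, and the fused CUDA kernel
      is the actual implementation.

          import torch

          def rms_norm_reference(x: torch.Tensor, weight: torch.Tensor,
                                 eps: float = 1e-5) -> torch.Tensor:
              # RMSNorm ("T5LayerNorm"): unlike LayerNorm there is no mean
              # subtraction and no bias. Compute in fp32 for stability,
              # then cast back before applying the affine weight.
              variance = x.float().pow(2).mean(dim=-1, keepdim=True)
              x_normed = x.float() * torch.rsqrt(variance + eps)  # eps inside the sqrt
              return weight * x_normed.to(x.dtype)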
  2. 07 Jul, 2022 1 commit
  3. 15 Apr, 2022 1 commit
  4. 07 Feb, 2022 1 commit
  5. 10 Oct, 2019 1 commit
  6. 27 Aug, 2019 2 commits
    • Enable Checkpointing (#420) · dec4fdd6
      ptrblck authored
      * add state_dict, load_state_dict
      
      * add test_restoring, test_loss_scale_decrease
      
      * disable amp outputs for checkpoint tests
      
      * add test for amp.state_dict, cleanup
      
      * add state_dict patch, add test
      
      * fixed testing, cleanup
      
      * add readme for checkpointing
      
      * add docs to source/amp
      
      * add review changes to doc
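      For reference, a minimal sketch of the checkpointing pattern this
      commit documents: amp's loss-scaler state is saved and restored
      alongside the model and optimizer. The path and opt_level are
      placeholders; model and optimizer are assumed to already exist.

          import torch
          from apex import amp

          model, optimizer = amp.initialize(model, optimizer, opt_level='O1')

          # Save: include amp's state (loss scale history) in the checkpoint
          checkpoint = {
              'model': model.state_dict(),
              'optimizer': optimizer.state_dict(),
              'amp': amp.state_dict(),
          }
          torch.save(checkpoint, 'amp_checkpoint.pt')

          # Restore: call amp.initialize with the same opt_level first,
          # then load all three state dicts
          checkpoint = torch.load('amp_checkpoint.pt')
          model.load_state_dict(checkpoint['model'])
          optimizer.load_state_dict(checkpoint['optimizer'])
          amp.load_state_dict(checkpoint['amp'])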
    • Updating docstrings for fused optimizers · 427e82cd
      Michael Carilli authored
  7. 27 Jun, 2019 1 commit
  8. 24 Jun, 2019 2 commits
  9. 04 Apr, 2019 1 commit
    • WIP: Handle arbitrary combinations of optimizers/models/losses (#232) · 3f87614f
      mcarilli authored
      * Refactor to allow more flexible treatment of multiple optimizers/models/losses
      
      * Adding _process_optimizers.py
      
      * Created L0 tests (now passing).
      
      * fix: minor print typo (#234)
      
      * make L1 results easier to read
      
      * L0 multiple model/optimizer/loss test fleshed out
      
      * Adding test that master params remain synced across distributed processes
      
      * Docstring updates
      
      * Docstring updates
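      For reference, a minimal sketch of the usage this refactor enables:
      amp.initialize accepting lists of models and optimizers, with
      num_losses giving each loss its own scaler and loss_id selecting it
      at backward time. The names (model0, loss0, ...) are placeholders.

          from apex import amp

          # Any combination of models and optimizers may be passed as lists
          [model0, model1], [optimizer0, optimizer1] = amp.initialize(
              [model0, model1], [optimizer0, optimizer1],
              opt_level='O1', num_losses=2)

          # loss_id tells amp which loss scaler to use for each backward pass
          with amp.scale_loss(loss0, optimizer0, loss_id=0) as scaled_loss:
              scaled_loss.backward()
          with amp.scale_loss(loss1, optimizer1, loss_id=1) as scaled_loss:
              scaled_loss.backward()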
  10. 20 Mar, 2019 1 commit
  11. 13 Mar, 2019 1 commit
  12. 12 Mar, 2019 2 commits
  13. 11 Mar, 2019 1 commit
  14. 07 Mar, 2019 4 commits
  15. 06 Mar, 2019 1 commit
  16. 05 Mar, 2019 1 commit
  17. 28 Feb, 2019 1 commit
  18. 06 Feb, 2019 1 commit
  19. 01 Feb, 2019 1 commit
  20. 12 Dec, 2018 1 commit
  21. 28 Nov, 2018 2 commits
  22. 30 Oct, 2018 1 commit
  23. 23 Oct, 2018 1 commit
    • [syncBN] (#48) · 81eef1ef
      jjsjann123 authored
      * [syncBN]
        added syncBN in native pure Python apex
        added fused CUDA kernels for sync BN, using Welford's algorithm for mean/var
          optional installation via 'python setup.py install --cuda_ext'
        added unit test with a side-by-side comparison between apex sync BN and
          PyTorch BN. Note that the PyTorch BN output will be slightly off
          because of numerical issues in its mean/var computation.
      
      * [syncBN PR]
        added fp16 support
        addressing review comments:
          1. updating last pow 2
          2. catching the import error when importing the syncBN kernel
      
      * [syncBN PR]
        added convert function to insert SyncBatchNorm
        refactored some kernel code
      
      * fixing type issues (fp16/fp32/fp64)
      added Kahan summation
      editing unit test to use PyTorch primitive ops in double precision; passing reasonable tests now
      
      * updating tensor creation calls
      
      * fixing the all_reduce contiguous tensor
      
      * transposed all reduce results
      
      * [syncBN]
      support fp16 input & fp32 layer for apex fp16
      partially fixing launch configs
      enabling imagenet example to run with --sync_bn
      
      * [syncBN PR]
      Documentation added
      
      * adjusting README
      
      * adjusting again
      
      * added some doc to imagenet example
      
      * [syncBN]
        warp-level reduction
        bug fix: updated warp reduction logic; check for a dummy element to avoid NaN.
        improved launch configs for better reduction kernels. A further improvement
          would be to increase the grid size.
      
      * [syncBN]
        fixing undefined behavior in __shfl_down_sync caused by divergent threads
          in the warp reduction
        changing at::native::empty to at::empty (per upstream review comments)
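      For reference, a minimal scalar sketch of the Welford update the fused
      kernels use for mean/var; it avoids the cancellation that makes the
      naive sum-of-squares approach (and hence the PyTorch BN comparison
      above) numerically touchy. This is an illustration in plain Python,
      not the CUDA kernel.

          def welford_update(count, mean, m2, new_value):
              # One step of Welford's online algorithm: a numerically stable
              # running mean and sum of squared deviations (m2).
              count += 1
              delta = new_value - mean
              mean += delta / count
              m2 += delta * (new_value - mean)  # uses the updated mean
              return count, mean, m2

          # After all updates: variance = m2 / count (biased, as BN uses)

      The convert function mentioned in the PR is exposed in apex as
      apex.parallel.convert_syncbn_model(model), which walks a module tree
      and replaces BatchNorm layers with SyncBatchNorm.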
  24. 28 Aug, 2018 1 commit
  25. 20 Jun, 2018 1 commit
  26. 16 Jun, 2018 2 commits
  27. 15 Jun, 2018 2 commits
  28. 14 Jun, 2018 1 commit
  29. 08 May, 2018 1 commit
  30. 25 Apr, 2018 2 commits