1. 13 May, 2020 2 commits
  2. 12 May, 2020 3 commits
  3. 11 May, 2020 1 commit
  4. 09 May, 2020 1 commit
  5. 08 May, 2020 2 commits
  6. 07 May, 2020 3 commits
  7. 28 Apr, 2020 1 commit
  8. 23 Apr, 2020 1 commit
  9. 22 Apr, 2020 2 commits
    • Deyu Fu
    • Fix LARC with mixed precision (#793) · 2ec84ebd
      Vinicius Reis authored
      The LARC optimizer wraps an underlying optimizer and must then be passed
      to amp.initialize for mixed precision. Three different crashes occurred in
      this situation; fix all of them and add a unit test.
      
      I don't know whether the 'LARC' in sys.modules check ever worked. In my
      setup, the sys.modules entry is 'apex.parallel.LARC'. Checking whether the
      variable is defined seems more reliable.
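      The sys.modules point above can be sketched in plain Python. This is a
      hypothetical probe, not apex code: the LARC class itself is never imported
      here, and the name lookups only illustrate why the bare 'LARC' key can
      miss while a NameError check cannot.

      ```python
      import sys

      # The sys.modules check can miss: modules are usually registered under
      # their fully qualified name, e.g. 'apex.parallel.LARC', not 'LARC'.
      found_short = 'LARC' in sys.modules
      found_qualified = any(name.endswith('.LARC') for name in sys.modules)

      # Checking whether the name is actually bound is more direct:
      try:
          LARC  # noqa: F821 - intentionally not imported in this sketch
          have_larc = True
      except NameError:
          have_larc = False
      ```

      In a process where apex was never imported, both checks come back False;
      the NameError form additionally works regardless of how the module was
      registered in sys.modules.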
  10. 20 Apr, 2020 2 commits
  11. 13 Apr, 2020 1 commit
  12. 05 Apr, 2020 2 commits
  13. 03 Apr, 2020 4 commits
  14. 02 Apr, 2020 1 commit
  15. 01 Apr, 2020 2 commits
  16. 31 Mar, 2020 2 commits
  17. 25 Mar, 2020 1 commit
    • Fix contrib fused_adam to work correctly with multi-GPU (#752) · 8fac3a72
      msbaines authored

      The CUDA kernel used by fused-adam was launched on the default stream of
      the default device, but it needs to run on the same device as the
      parameter tensor.

      Fixed by using a context manager to set the correct default device. For
      the use_mt case, an error is raised instead; alternatively, the use_mt
      case could launch one kernel per CUDA device.
      
      The non-contrib version will also need to be fixed.
      Co-authored-by: Mandeep Singh Baines <msb@fb.com>
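      The device-context-manager fix above can be sketched without CUDA. The
      helper below is a hypothetical stand-in for torch.cuda.device (not the
      actual apex code); it only demonstrates the pattern of temporarily
      switching the current device to the parameter's device around the launch.

      ```python
      from contextlib import contextmanager

      _current_device = 0  # module-level "current device", like CUDA's default

      @contextmanager
      def use_device(index):
          """Temporarily make `index` the current device (hypothetical helper)."""
          global _current_device
          previous = _current_device
          _current_device = index
          try:
              yield
          finally:
              _current_device = previous

      def launch_fused_adam_kernel(param_device):
          # Before the fix: the kernel implicitly ran on _current_device
          # (the default device, usually 0), regardless of where the
          # parameter tensor lived. After the fix: wrap the launch so the
          # kernel runs on the parameter's device.
          with use_device(param_device):
              return _current_device  # the device the kernel actually sees
      ```

      A launch for a parameter on device 1 now sees device 1, and the default
      device is restored once the context manager exits.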
  18. 23 Mar, 2020 2 commits
  19. 21 Mar, 2020 2 commits
  20. 20 Mar, 2020 3 commits
  21. 17 Mar, 2020 2 commits