1. 12 Jul, 2019 1 commit
    • [sbn update] (#384) · 574fe244
      jjsjann123 authored
      Fixing an empty return from the Python implementation
        Adding a proper test to verify the Python implementation's functional correctness
  2. 03 Jul, 2019 1 commit
  3. 31 May, 2019 1 commit
  4. 16 May, 2019 1 commit
  5. 13 May, 2019 1 commit
  6. 30 Apr, 2019 1 commit
  7. 10 Apr, 2019 3 commits
  8. 04 Apr, 2019 1 commit
    • WIP: Handle arbitrary combinations of optimizers/models/losses (#232) · 3f87614f
      mcarilli authored
      * Refactor to allow more flexible treatment of multiple optimizers/models/losses
      
      * Adding _process_optimizers.py
      
      * Created L0 tests (now passing).
      
      * fix: minor print typo (#234)
      
      * make L1 results easier to read
      
      * L0 multiple model/optimizer/loss test fleshed out
      
      * Adding test that master params remain synced across distributed processes
      
      * Docstring updates
      
      * Docstring updates
  9. 22 Mar, 2019 1 commit
    • Check cuda version (#216) · 5b8faa29
      mcarilli authored
      * Adding Torch + bare-metal nvcc version check and container build tests
      
      * Putting a canary in the coalmine
      
      * canary proved elusive
      
      * Trying direct setup.py install
      
      * this should work
      
      * Removing canary
      
      * hopefully this works
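      The Torch + bare-metal nvcc version check this commit describes might be sketched as follows; `parse_nvcc_release` and `check_cuda_version` are hypothetical names for illustration, not apex's actual functions.

      ```python
      # Hedged sketch of a CUDA version check: compare the toolkit version
      # reported by bare-metal nvcc against the CUDA version PyTorch was
      # built with. Names are illustrative, not apex's actual API.
      import re
      import subprocess

      def parse_nvcc_release(version_text):
          """Extract (major, minor) from `nvcc --version` output."""
          match = re.search(r"release (\d+)\.(\d+)", version_text)
          if match is None:
              raise RuntimeError("Could not parse nvcc output:\n" + version_text)
          return int(match.group(1)), int(match.group(2))

      def check_cuda_version(nvcc="nvcc"):
          """Raise if nvcc and torch disagree on the CUDA version."""
          import torch  # deferred so the parser above works without torch
          output = subprocess.check_output([nvcc, "--version"]).decode()
          bare = parse_nvcc_release(output)
          built = tuple(int(x) for x in torch.version.cuda.split(".")[:2])
          if bare != built:
              raise RuntimeError(
                  "CUDA version mismatch: nvcc reports %d.%d but torch was "
                  "built with %d.%d" % (bare + built))
      ```

      A mismatch here typically means compiled extensions would be built against a different CUDA toolkit than the one PyTorch links, which is why a hard failure at install time is preferable to a silent runtime error.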
  10. 19 Mar, 2019 1 commit
  11. 13 Mar, 2019 1 commit
  12. 12 Mar, 2019 1 commit
  13. 10 Mar, 2019 1 commit
  14. 08 Mar, 2019 3 commits
  15. 07 Mar, 2019 1 commit
  16. 02 Mar, 2019 1 commit
  17. 01 Mar, 2019 4 commits
  18. 28 Feb, 2019 1 commit
  19. 26 Feb, 2019 1 commit
  20. 24 Feb, 2019 1 commit
  21. 22 Feb, 2019 1 commit
  22. 19 Feb, 2019 1 commit
  23. 16 Feb, 2019 3 commits
  24. 13 Feb, 2019 1 commit
  25. 08 Feb, 2019 2 commits
  26. 06 Feb, 2019 1 commit
  27. 05 Feb, 2019 1 commit
    • Better FP16 support in pytorch fp16 utils. · 713e0fb8
      Jerry Ma authored
      This commit adds an FP16Model class as a successor to network_to_half.
      
      The benefits of this class are:
      
      - Preservation of single-precision for BatchNorm layers. The models
        generated by network_to_half() convert BatchNorm moment tensors to
        half-precision, then back to single-precision, which hurts the
        accuracy of the moment estimators and occasionally results in NaNs.
      - Support for multi-argument nn.Modules (self-explanatory from code).
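      As a rough illustration of the first benefit, selective precision conversion might look like the sketch below; `network_to_half_keep_bn` is a hypothetical helper, not the actual FP16Model implementation.

      ```python
      # Hedged sketch: cast a model to FP16 while keeping BatchNorm layers,
      # and hence their running-moment buffers, in FP32. Illustrative only;
      # the FP16Model class from this commit may differ in detail.
      import torch
      import torch.nn as nn

      def network_to_half_keep_bn(network):
          """Cast all parameters/buffers to half precision, then restore
          every BatchNorm layer to single precision so its moment
          estimators never round-trip through FP16."""
          network.half()
          for module in network.modules():
              if isinstance(module, nn.modules.batchnorm._BatchNorm):
                  module.float()
          return network
      ```

      For example, applying this to `nn.Sequential(nn.Linear(4, 4), nn.BatchNorm1d(4))` leaves the linear weights in `torch.float16` while the BatchNorm weights and `running_mean` stay `torch.float32`.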
  28. 03 Feb, 2019 1 commit
  29. 01 Feb, 2019 1 commit
  30. 29 Jan, 2019 1 commit