  1. 10 Apr, 2019 3 commits
  2. 04 Apr, 2019 1 commit
    • WIP: Handle arbitrary combinations of optimizers/models/losses (#232) · 3f87614f
      mcarilli authored
      * Refactor to allow more flexible treatment of multiple optimizers/models/losses
      
      * Adding _process_optimizers.py
      
      * Created L0 tests (now passing).
      
      * fix: minor print typo (#234)
      
      * make L1 results easier to read
      
      * L0 multiple model/optimizer/loss test fleshed out
      
      * Adding test that master params remain synced across distributed processes
      
      * Docstring updates
      
      * Docstring updates
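One reason the refactor above treats each loss separately is that dynamic loss scaling is per-loss state: a gradient overflow from one loss should not shrink the scale used for the others. As a rough illustration only (the `LossScaler` class below is hypothetical, not Apex's actual API; the real logic lives in amp internals such as `_process_optimizers.py`):

```python
import math

class LossScaler:
    """Minimal dynamic loss-scale sketch (hypothetical; not Apex's real class).

    Halve the scale when a scaled gradient overflows; double it again
    after `growth_interval` consecutive clean steps.
    """

    def __init__(self, init_scale=2.0 ** 15, growth_interval=2000):
        self.scale = init_scale
        self.growth_interval = growth_interval
        self._good_steps = 0

    def update(self, grads):
        # Any non-finite scaled gradient invalidates the whole step.
        if any(not math.isfinite(g) for g in grads):
            self.scale /= 2.0
            self._good_steps = 0
            return False  # caller should skip the optimizer step
        self._good_steps += 1
        if self._good_steps >= self.growth_interval:
            self.scale *= 2.0
            self._good_steps = 0
        return True

# With several losses, each gets an independent scaler, so an overflow
# in one loss leaves the others' scales untouched.
scalers = [LossScaler(growth_interval=2) for _ in range(3)]
scalers[0].update([float("inf")])  # only scalers[0] backs off
```

This is the general dynamic-scaling pattern; the commit's actual contribution is wiring combinations of models, optimizers, and losses through `amp` while keeping such state separate.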
  3. 22 Mar, 2019 1 commit
    • Check cuda version (#216) · 5b8faa29
      mcarilli authored
      * Adding Torch + bare-metal nvcc version check and container build tests
      
      * Putting a canary in the coalmine
      
      * canary proved elusive
      
      * Trying direct setup.py install
      
      * this should work
      
      * Removing canary
      
      * hopefully this works
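The check this commit describes compares the CUDA toolkit PyTorch was built against with the bare-metal `nvcc` on the build machine. A minimal sketch of that idea, assuming a hypothetical helper fed two version strings (Apex's real check in `setup.py` may be stricter, e.g. comparing minor versions too):

```python
import re

def cuda_majors_match(torch_cuda: str, nvcc_output: str) -> bool:
    """Hypothetical helper: compare the CUDA major version PyTorch was
    built with (e.g. "10.0" from torch.version.cuda) against the one
    reported in `nvcc --version` output."""
    m = re.search(r"release (\d+)\.(\d+)", nvcc_output)
    if m is None:
        raise RuntimeError("could not parse nvcc --version output")
    return int(m.group(1)) == int(torch_cuda.split(".")[0])

# In a real setup.py the inputs would be torch.version.cuda and the
# captured output of `nvcc --version`; hard-coded strings here.
nvcc_out = "Cuda compilation tools, release 10.0, V10.0.130"
```

Mismatched toolkits are a common cause of extension-build failures, which is why the commit also adds container build tests.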
  4. 19 Mar, 2019 1 commit
  5. 13 Mar, 2019 1 commit
  6. 12 Mar, 2019 1 commit
  7. 10 Mar, 2019 1 commit
  8. 08 Mar, 2019 3 commits
  9. 07 Mar, 2019 1 commit
  10. 02 Mar, 2019 1 commit
  11. 01 Mar, 2019 4 commits
  12. 28 Feb, 2019 1 commit
  13. 26 Feb, 2019 1 commit
  14. 24 Feb, 2019 1 commit
  15. 22 Feb, 2019 1 commit
  16. 19 Feb, 2019 1 commit
  17. 16 Feb, 2019 3 commits
  18. 13 Feb, 2019 1 commit
  19. 08 Feb, 2019 2 commits
  20. 06 Feb, 2019 1 commit
  21. 05 Feb, 2019 1 commit
    • Better FP16 support in PyTorch fp16 utils. · 713e0fb8
      Jerry Ma authored
      This commit adds an FP16Model class as a successor to network_to_half.
      
      The benefits of this class are:
      
      - Preservation of single-precision for BatchNorm layers. The models
        generated by network_to_half() convert BatchNorm moment tensors to
        half-precision, then back to single-precision, which hurts the
        accuracy of the moment estimators and occasionally results in NaNs.
      - Support for multi-argument nn.Modules (self-explanatory from code).
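The precision problem described above is easy to see with Python's stdlib alone: IEEE-754 half precision carries only ~3 decimal digits, so a small momentum update to a BatchNorm running mean can round away entirely. A self-contained demonstration (using `struct`'s `"e"` half-precision format; the numbers are illustrative, not from the commit):

```python
import struct

def to_half(x: float) -> float:
    """Round-trip a Python float through IEEE-754 half precision."""
    return struct.unpack("e", struct.pack("e", x))[0]

# A BatchNorm running mean near 1000: fp16 spacing there is 0.5,
# so a momentum-style increment of ~0.1 is rounded away completely.
mean = 1000.0
update = 0.1 * (1001.0 - mean)        # new observation 1001.0, momentum 0.1
fp16_mean = to_half(to_half(mean) + to_half(update))  # update lost
fp32_mean = mean + update                             # update kept
```

This is exactly why keeping BatchNorm statistics in single precision matters: the moment estimators stop moving (or, with variances near zero, produce NaNs) when stored in half precision.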
  22. 03 Feb, 2019 1 commit
  23. 01 Feb, 2019 1 commit
  24. 29 Jan, 2019 3 commits
  25. 28 Jan, 2019 1 commit
  26. 25 Jan, 2019 1 commit
  27. 15 Jan, 2019 1 commit
    • [sync BN nhwc] · 443fa76e
      Jie authored
      Added a kernel to support sync BN for channels-last (NHWC) tensors.
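The indexing change behind an NHWC ("channels last") batch-norm kernel is that the channel becomes the fastest-varying dimension: element (n, h, w, c) lives at flat index ((n*H + h)*W + w)*C + c, so all values of one channel sit at stride C. A pure-Python sketch of the per-channel mean under that layout (illustrative only; the commit's actual kernel is CUDA):

```python
def channel_means_nhwc(buf, N, H, W, C):
    """Per-channel mean over a flat buffer laid out as NHWC.

    Because channels are contiguous and fastest-varying, the channel
    of flat index i is simply i % C.
    """
    sums = [0.0] * C
    for i, v in enumerate(buf):
        sums[i % C] += v
    count = N * H * W
    return [s / count for s in sums]

# A 1x2x2x3 tensor where each channel c holds the constant value c.
buf = [0.0, 1.0, 2.0] * 4
```

In NCHW, by contrast, a channel's values are contiguous blocks of H*W elements, which is why a separate kernel (with different reduction strides and coalescing behavior) is needed for each layout.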
  28. 15 Dec, 2018 1 commit