1. 17 Dec, 2018 3 commits
  2. 14 Dec, 2018 1 commit
  3. 12 Dec, 2018 1 commit
  4. 11 Dec, 2018 5 commits
  5. 10 Dec, 2018 2 commits
  6. 07 Dec, 2018 1 commit
  7. 06 Dec, 2018 1 commit
  8. 04 Dec, 2018 2 commits
  9. 03 Dec, 2018 2 commits
      [syncBN] (#77) · 0273d7ad
      mcarilli authored
      adjusted kernel config for better perf.
      removed divergence in welford warp reduction.
      [syncBN] (#90) · 5dad4c21
      jjsjann123 authored
      supporting user specified process group
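The syncBN commits above mention a "welford warp reduction". As background, here is a minimal pure-Python sketch of Welford's online algorithm — the numerically stable running mean/variance update that such a reduction parallelizes. The function names are illustrative, not Apex's actual kernel code:

```python
def welford_update(count, mean, m2, new_value):
    """One step of Welford's online algorithm.

    Maintains (count, mean, M2), where M2 is the running sum of squared
    deviations from the current mean; population variance = M2 / count.
    """
    count += 1
    delta = new_value - mean
    mean += delta / count
    delta2 = new_value - mean
    m2 += delta * delta2
    return count, mean, m2


def welford_mean_var(values):
    """Streaming mean and population variance over an iterable."""
    count, mean, m2 = 0, 0.0, 0.0
    for v in values:
        count, mean, m2 = welford_update(count, mean, m2, v)
    return mean, m2 / count
```

Unlike the naive two-pass (or sum-of-squares) formula, this accumulates deviations from the running mean, which avoids catastrophic cancellation when the variance is small relative to the mean.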
  10. 02 Dec, 2018 1 commit
  11. 30 Nov, 2018 2 commits
  12. 28 Nov, 2018 3 commits
  13. 14 Nov, 2018 1 commit
  14. 10 Nov, 2018 1 commit
  15. 06 Nov, 2018 1 commit
      [syncBN] · ee67e56a
      Jie authored
      adjusted kernel config for better perf.
      removed divergence in welford warp reduction.
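A warp (or cross-process) reduction over per-thread Welford states also needs a way to merge two partial `(count, mean, M2)` triples. The standard parallel-combine step, applied repeatedly up the reduction tree, is sketched below in plain Python — an illustrative sketch, not the Apex CUDA implementation:

```python
def welford_combine(count_a, mean_a, m2_a, count_b, mean_b, m2_b):
    """Merge two partial Welford states (count, mean, M2) into one.

    This is the combine step a tree/warp reduction applies repeatedly;
    the merged population variance is M2 / count.
    """
    count = count_a + count_b
    if count == 0:
        return 0, 0.0, 0.0
    delta = mean_b - mean_a
    mean = mean_a + delta * count_b / count
    m2 = m2_a + m2_b + delta * delta * count_a * count_b / count
    return count, mean, m2
```

Because the combine is associative, partial states can be merged in any tree order — which is what lets a warp reduce them with shuffle instructions and lets syncBN merge per-device statistics.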
  16. 01 Nov, 2018 4 commits
  17. 31 Oct, 2018 1 commit
  18. 30 Oct, 2018 7 commits
  19. 29 Oct, 2018 1 commit
      Merging in fused adam optimizer, additional DDP features tested in 18.10 (#60) · e0bc5d62
      mcarilli authored
      * test passes
      
      * notes
      
      * Using C++-side flatten and unflatten functions
      
      * Adding csrc
      
      * Persistent synchronization event so it doesn't need to be created and destroyed each time
      
      * Interop with parameter flattening in SSD
      
      * Added deterministic option to imagenet main.py
      
      * Adding options to split gradient averaging and allreduce in pure fp32
      
      * Fixing allreduce_maybe_retain call
      
      * Fixing allreduce_fallback
      
      * Also sync active_i_buckets from rank 0
      
      * Making retain_allreduce_buffers compatible with/orthogonal to delay_allreduce=True|False
      
      * Correcting syntax error, now all seems to work with SSD
      
      * Optional cpp extension build
      
      * Add mixed precision adam optimizer (#59)
      
      * Add FusedAdam Optimizer to Apex that places all the math into a cuda kernel.
      
      * Added fixes to fused_adam to get it to work with network.
      
      * wip work on python interface for adam with options
      
      * fix dispatch for halfs, add python options to handle optional half gradients and params
      
      * cleanup, get rid of grid-stride loop
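The FusedAdam commits above move the per-parameter Adam math into a single CUDA kernel. For reference, the elementwise update being fused is the standard Adam step, sketched here for a single scalar parameter in plain Python — a simplified sketch, not Apex's implementation (no weight decay, gradient scaling, or half-precision handling):

```python
import math

def adam_step(param, grad, m, v, step,
              lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update for a single scalar parameter.

    m and v are the running first/second moment estimates; `step` is the
    1-based iteration count used for bias correction.
    """
    m = beta1 * m + (1.0 - beta1) * grad
    v = beta2 * v + (1.0 - beta2) * grad * grad
    m_hat = m / (1.0 - beta1 ** step)          # bias-corrected first moment
    v_hat = v / (1.0 - beta2 ** step)          # bias-corrected second moment
    param = param - lr * m_hat / (math.sqrt(v_hat) + eps)
    return param, m, v
```

An unfused optimizer launches several elementwise kernels per parameter tensor for these lines; fusing them into one kernel reads and writes each parameter, gradient, and moment buffer only once, which is where the speedup comes from.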