1. 02 Jan, 2019 1 commit
    • [syncBN] · fa719e8b
      Jie authored
      Replacing new_group with torch.distributed.group.WORLD avoids creating a
      new group in every iteration.
      
      This should resolve issue #105, "Training gets stuck when using SyncBN".
  2. 22 Dec, 2018 1 commit
    • Update __init__.py · 870c917a
      rxy1212 authored
      torch.distributed.new_group and torch.distributed.reduce_op are deprecated as of PyTorch 1.0.0; this fix avoids some errors for now.
  3. 17 Dec, 2018 2 commits
  4. 14 Dec, 2018 1 commit
  5. 10 Dec, 2018 1 commit
  6. 03 Dec, 2018 1 commit
  7. 14 Nov, 2018 1 commit
  8. 01 Nov, 2018 2 commits
  9. 30 Oct, 2018 2 commits
  10. 29 Oct, 2018 1 commit
    • Merging in fused adam optimizer, additional DDP features tested in 18.10 (#60) · e0bc5d62
      mcarilli authored
      * test passes
      
      * notes
      
      * Using C++-side flatten and unflatten functions
      
      * Adding csrc
      
      * Persistent synchronization event so it doesn't need to be created and destroyed each time
      
      * Interop with parameter flattening in SSD
      
      * Added deterministic option to imagenet main.py
      
      * Adding options to split gradient averaging and allreduce in pure fp32
      
      * Fixing allreduce_maybe_retain call
      
      * Fixing allreduce_fallback
      
      * Also sync active_i_buckets from rank 0
      
      * Making retain_allreduce_buffers compatible with/orthogonal to delay_allreduce=True|False
      
      * Correcting syntax error, now all seems to work with SSD
      
      * Optional cpp extension build
      
      * Add mixed precision adam optimizer (#59)
      
      * Add FusedAdam optimizer to Apex that places all the math into a single CUDA kernel.
      
      * Added fixes to fused_adam to get it to work with network.
      
      * wip work on python interface for adam with options
      
      * fix dispatch for halves, add Python options to handle optional half gradients and params
      
      * cleanup, get rid of grid-stride loop
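The FusedAdam commit above places the element-wise Adam math into one CUDA kernel rather than launching several ops per parameter. As an illustration of what gets fused, here is that math in plain Python; the function name and signature are ours for illustration, not apex's API:

```python
import math

def adam_step(p, g, m, v, step, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One element-wise Adam update over lists of scalars.

    This is the math a fused Adam kernel performs in a single pass;
    pure-Python sketch only (names and defaults are ours).
    """
    out_p, out_m, out_v = [], [], []
    # Bias-correction terms for the running moment estimates.
    bc1 = 1 - beta1 ** step
    bc2 = 1 - beta2 ** step
    for pi, gi, mi, vi in zip(p, g, m, v):
        mi = beta1 * mi + (1 - beta1) * gi       # first moment (mean of grads)
        vi = beta2 * vi + (1 - beta2) * gi * gi  # second moment (mean of squares)
        m_hat = mi / bc1
        v_hat = vi / bc2
        pi = pi - lr * m_hat / (math.sqrt(v_hat) + eps)
        out_p.append(pi)
        out_m.append(mi)
        out_v.append(vi)
    return out_p, out_m, out_v
```

Fusing these lines into one kernel saves a kernel launch and a global-memory round trip per intermediate, which is the point of the FusedAdam work.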
  11. 23 Oct, 2018 1 commit
    • [syncBN] (#48) · 81eef1ef
      jjsjann123 authored
      * [syncBN]
        added syncBN in native pure-Python apex
        added fused CUDA kernels for sync BN, using Welford's algorithm for mean/var;
          optional installation via 'python setup.py install --cuda_ext'
        added a unit test with a side-by-side comparison of apex sync BN against
          PyTorch BN. Note that because of numerical issues in PyTorch BN's
          mean/var computation, its output will be slightly off.
      
      * [syncBN PR]
        added fp16 support
        addressing review comments on:
          1. updating last pow 2
          2. look for import error when importing syncBN kernel
      
      * [syncBN PR]
        added convert function to insert SyncBatchNorm
        refactored some kernel code
      
      * fixing type issues (fp16/fp32/fp64)
      added Kahan summation
      editing unit test to use PyTorch primitive ops with double precision; passing reasonable tests now
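Kahan (compensated) summation, added in the commit above, carries a running error term so that small addends are not lost to floating-point rounding when accumulated into a large total. A minimal pure-Python sketch of the technique (illustrative only, not apex's kernel code):

```python
def kahan_sum(xs):
    """Compensated (Kahan) summation of a sequence of floats."""
    total = 0.0
    comp = 0.0                   # running compensation for lost low-order bits
    for x in xs:
        y = x - comp             # re-inject the bits lost on the previous add
        t = total + y            # big + small: low-order bits of y may be lost
        comp = (t - total) - y   # recover exactly what was just lost
        total = t
    return total
```

Accumulating many small same-sign terms (as a mean/variance reduction does) is where this pays off: the error stays at a couple of ulps instead of growing with the number of terms.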
      
      * updating tensor creation calls
      
      * fixing the all_reduce contiguous tensor
      
      * transposed all reduce results
      
      * [syncBN]
      support fp16 input & fp32 layer for apex fp16
      partially fixing launch configs
      enabling imagenet example to run with --sync_bn
      
      * [syncBN PR]
      Documentation added
      
      * adjusting README
      
      * adjusting again
      
      * added some doc to imagenet example
      
      * [syncBN]
        warp-level reduction
        bug fix: warp reduction logic updated; check for the dummy element to avoid NaN.
        improved launch config for better reduction kernels. A further improvement
          would be to increase the grid size.
      
      * [syncBN]
        fixing undefined behavior in __shfl_down_sync caused by divergent threads
          in the warp reduction.
        changing at::native::empty to at::empty (upstream comments)
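Welford's algorithm, which the fused syncBN kernels above use for mean/var, produces both statistics in a single numerically stable pass instead of the naive E[x^2] - E[x]^2 formula, whose cancellation causes the "slightly off" outputs noted for the PyTorch comparison. A pure-Python sketch of the idea (not the actual CUDA kernel):

```python
def welford(xs):
    """One-pass mean and biased variance via Welford's online update."""
    mean = 0.0
    m2 = 0.0        # running sum of squared deviations from the current mean
    n = 0
    for x in xs:
        n += 1
        delta = x - mean
        mean += delta / n
        m2 += delta * (x - mean)   # uses the deltas w.r.t. old AND new mean
    var = m2 / n if n else 0.0     # biased variance, as batch norm uses
    return mean, var
```

The update also has a well-known parallel/merge form for combining per-block partial (n, mean, m2) triples, which is what makes it a good fit for a CUDA reduction.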
  12. 10 Oct, 2018 2 commits
  13. 08 Oct, 2018 1 commit
  14. 03 Oct, 2018 1 commit
  15. 29 Sep, 2018 3 commits
  16. 19 Sep, 2018 1 commit
    • Fix param freezing (#47) · 53e1b61a
      mcarilli authored
      * Fix appears to work in Tomasz's example.
      
      * Somehow shared_param got de-enabled again?
  17. 18 Sep, 2018 1 commit
  18. 05 Sep, 2018 2 commits
  19. 30 Aug, 2018 1 commit
  20. 28 Aug, 2018 3 commits
  21. 14 Aug, 2018 1 commit
  22. 18 Jul, 2018 2 commits
  23. 04 Jul, 2018 1 commit
  24. 03 Jul, 2018 1 commit
    • LARC clipping+documentation (#6) · 88effd5d
      Raul Puri authored
      * Proper implementation of LARC clipping
       * Documentation of the LARC class
       * Modification of FP16_Optimizer to absorb the optimizer instance being wrapped instead of creating a new optimizer instance of the same class.
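LARC derives a per-layer "trust ratio" from the weight and gradient norms; in clipping mode that local rate is capped so the effective step never exceeds the plain global-learning-rate step. A hedged pure-Python sketch of the clipping idea (function name, defaults, and details are ours and may differ from apex's LARC class):

```python
import math

def larc_scale(weights, grads, global_lr, trust_coef=0.02, clip=True, eps=1e-8):
    """Scale a layer's gradients by a LARC-style trust ratio (sketch)."""
    w_norm = math.sqrt(sum(w * w for w in weights))
    g_norm = math.sqrt(sum(g * g for g in grads))
    # Adaptive local learning rate proportional to ||w|| / ||g||.
    local_lr = trust_coef * w_norm / (g_norm + eps)
    if clip:
        # Clipping mode: cap the ratio at 1 so that, after the wrapped
        # optimizer multiplies by global_lr, the update is at most SGD's.
        scale = min(local_lr / global_lr, 1.0)
    else:
        scale = local_lr / global_lr  # pure adaptive-scaling mode
    return [g * scale for g in grads]
```

The wrapped optimizer then applies its usual global_lr to the rescaled gradients, which is why LARC can absorb an existing optimizer instance rather than re-creating one.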
  25. 26 Jun, 2018 2 commits
  26. 22 Jun, 2018 1 commit
  27. 20 Jun, 2018 1 commit
  28. 16 Jun, 2018 1 commit
  29. 15 Jun, 2018 1 commit