    Merging in fused adam optimizer, additional DDP features tested in 18.10 (#60) · e0bc5d62
    mcarilli authored
    * test passes
    
    * notes
    
    * Using C++-side flatten and unflatten functions
    
    * Adding csrc
    
    * Persistent synchronization event so it doesn't need to be created and destroyed each time
    
    * Interop with parameter flattening in SSD
    
    * Added deterministic option to imagenet main.py
    
    * Adding options to split gradient averaging and allreduce in pure fp32
    
    * Fixing allreduce_maybe_retain call
    
    * Fixing allreduce_fallback
    
    * Also sync active_i_buckets from rank 0
    
    * Making retain_allreduce_buffers compatible with/orthogonal to delay_allreduce=True|False
    
    * Correcting syntax error, now all seems to work with SSD
    
    * Optional cpp extension build
    
    * Add mixed precision adam optimizer (#59)
    
    * Add FusedAdam Optimizer to Apex that places all the math into a cuda kernel.
    
    * Added fixes to fused_adam to get it to work with network.
    
    * WIP: python interface for adam with options
    
    * fix dispatch for halves, add python options to handle optional half gradients and params
    
    * cleanup, get rid of grid-stride loop
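    The "split gradient averaging and allreduce in pure fp32" option above can be sketched without torch. This is a minimal illustration of the idea as I read it (sum gradients across ranks in fp32, then divide by world size as a separate step); `allreduce_avg_fp32` and the list-per-rank model are illustrative stand-ins, not Apex's API:

    ```python
    def allreduce_avg_fp32(rank_grads):
        """Simulate fp32 gradient averaging across DDP ranks.

        rank_grads: one flat gradient list per rank (values may originate
        as fp16). Summing in fp32 before dividing avoids fp16 overflow and
        precision loss; in real DDP the sum would be a NCCL allreduce and
        the divide a separate local step, which is the "split" referred to.
        """
        world_size = len(rank_grads)
        # allreduce (sum) across ranks, element-wise
        summed = [sum(vals) for vals in zip(*rank_grads)]
        # separate averaging step
        return [s / world_size for s in summed]
    ```
    
    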
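    For reference, the per-element math that a fused Adam kernel folds into a single pass looks roughly like the following pure-Python sketch. `adam_step` is an illustrative name, not Apex's FusedAdam interface; the point is only that every line of the loop body is element-wise, so a fused kernel can do all of it in one launch:

    ```python
    import math

    def adam_step(p, g, m, v, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8, step=1):
        """One Adam update over flat parameter/gradient lists.

        A fused CUDA kernel performs these element-wise operations in a
        single kernel launch instead of several separate ones.
        """
        bias_corr1 = 1 - beta1 ** step
        bias_corr2 = 1 - beta2 ** step
        for i in range(len(p)):
            m[i] = beta1 * m[i] + (1 - beta1) * g[i]         # first moment
            v[i] = beta2 * v[i] + (1 - beta2) * g[i] * g[i]  # second moment
            m_hat = m[i] / bias_corr1                        # bias correction
            v_hat = v[i] / bias_corr2
            p[i] -= lr * m_hat / (math.sqrt(v_hat) + eps)    # parameter update
        return p, m, v
    ```
    
    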