1. 06 Jul, 2020 1 commit
    • [sync BN] (#792) · 1ff54b8f
      jjsjann123 authored
      * [sync BN]

      Support non-uniform batch sizes across the process group (see the sketch after this entry).

      TODO: tests should be added once cleaned up.
      
      * updating unit tests
      
      * new unit tests for different inputs
      
      * cleaning
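      A minimal standalone sketch, not the apex kernel, of what supporting
      non-uniform per-rank batch sizes implies for the reduction: each rank's
      (mean, m2, count) triple is merged with a count-weighted, Chan-style
      combine instead of averaging per-rank means directly. All names here are
      made up; compiles with plain nvcc.

        // Hypothetical count-weighted merge of per-rank batch-norm statistics.
        #include <cstdio>

        struct Stats {
            float mean;   // per-rank mean for one channel
            float m2;     // sum of squared deviations from that mean
            float count;  // elements this rank saw; may differ across ranks
        };

        // Combine two partial statistics (Chan's parallel variance formula).
        __host__ __device__ Stats merge(Stats a, Stats b) {
            Stats out;
            out.count = a.count + b.count;
            if (out.count == 0.f) { out.mean = 0.f; out.m2 = 0.f; return out; }
            float delta = b.mean - a.mean;
            out.mean = a.mean + delta * (b.count / out.count);
            out.m2   = a.m2 + b.m2 + delta * delta * (a.count * b.count / out.count);
            return out;
        }

        int main() {
            // Two "ranks" with different batch sizes: 4 elements vs. 2 elements.
            Stats r0 = {2.0f, 8.0f, 4.0f};  // e.g. data {0, 2, 2, 4}
            Stats r1 = {5.0f, 2.0f, 2.0f};  // e.g. data {4, 6}
            Stats g  = merge(r0, r1);
            printf("global mean %.3f, biased var %.3f over %.0f elements\n",
                   g.mean, g.m2 / g.count, g.count);  // 3.000, 3.667, 6
            return 0;
        }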
  2. 06 Sep, 2019 1 commit
    • Fix for #456 (#477) · 325f5a0b
      mcarilli authored
      * Pushing for build tests
      
      * Contrib files
      
      * Removing deprecated checks
  3. 03 Jul, 2019 2 commits
  4. 27 Apr, 2019 1 commit
    • Bnp integration pr (#275) · fedfe0d7
      jjsjann123 authored
      * Persistent group batchnorm added

      Added persistent grouped batch norm for performance in the strong-scaling case.
      Currently only supporting:

        1. NHWC layout
        2. fp16
        3. synchronization only within a node!

      An environment variable tunes LAUNCH_MARGIN, which limits the number of CTAs
      used by the persistent kernel (see the sketch after this entry).
      
      Documentation and examples will follow.
      
      * updating type().scalarType() to scalar_type()
      
      * moving the launch margin to be defined at layer creation; adding a knob to cap max CTAs per SM
      
      * fixing the cta computation
      
      * review comments:

      set device_id through cudaGetDevice()
      moved cudaMemset to cudaMemsetAsync
      updated __threadfence() to __threadfence_system() for inter-device writes
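      A hedged host-side sketch, not apex code, of the launch sizing and review
      items above: the environment-variable name (BN_LAUNCH_MARGIN), the default
      margin, and the stub kernel are invented; it only shows how a launch margin
      and a per-SM CTA cap turn into a grid size for a persistent kernel, with
      cudaGetDevice and cudaMemsetAsync used as the review comments describe.

        // Hypothetical launch sizing for a persistent kernel.
        #include <cstdio>
        #include <cstdlib>
        #include <cuda_runtime.h>

        __global__ void persistent_bn_stub() { /* persistent kernel body elided */ }

        int main() {
            int device = 0;
            cudaGetDevice(&device);  // per the review comment above

            int num_sms = 0;
            cudaDeviceGetAttribute(&num_sms, cudaDevAttrMultiProcessorCount, device);

            // Hypothetical knobs, resolved once at layer creation: how many SMs
            // to leave free (launch margin) and a cap on CTAs per SM.
            const char* margin_env = getenv("BN_LAUNCH_MARGIN");  // made-up name
            int launch_margin   = margin_env ? atoi(margin_env) : 2;
            int max_ctas_per_sm = 1;

            int grid = (num_sms - launch_margin) * max_ctas_per_sm;
            if (grid < 1) grid = 1;  // persistent CTAs are held for the kernel's lifetime

            cudaStream_t stream;
            cudaStreamCreate(&stream);

            // Per the review comment above: clear the workspace asynchronously
            // on the stream instead of with a blocking cudaMemset.
            float* workspace = nullptr;
            cudaMalloc(&workspace, grid * sizeof(float));
            cudaMemsetAsync(workspace, 0, grid * sizeof(float), stream);

            persistent_bn_stub<<<grid, 256, 0, stream>>>();
            cudaStreamSynchronize(stream);

            printf("launched %d persistent CTAs on %d SMs\n", grid, num_sms);
            cudaFree(workspace);
            cudaStreamDestroy(stream);
            return 0;
        }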
  5. 26 Apr, 2019 1 commit
    • Replace type().ScalarType() with scalar_type() (#272) · 855808f3
      ptrblck authored
      * change .type().ScalarType() to .scalar_type() + at::ScalarType::X to at::kX
      
      * revert scalar_type() to type() for AT_DISPATCH_FLOATING_TYPES_AND_HALF
      
      * revert scalar_type() to type() in AT_DISPATCH_FLOATING_TYPES
      
      * revert scalar_type() to type() for AT_DISPATCH_FLOATING_TYPES_AND_HALF in welford.cu
      
      * revert scalar_type() to type() in layer_norm_cuda_kernel.cu
      
      * revert at::kType to at::ScalarType::Type
      
      * use DISPATCH_FLOAT_AND_HALF to get rid of warnings
      
      * add dispatch mechanisms for double+float and double+float+half (call-site pattern sketched after this entry)
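      A minimal torch-extension sketch, not apex code, of the call-site pattern
      being migrated: read the dtype with tensor.scalar_type() and dispatch over
      float/double/half. At the time of this PR the AT_DISPATCH macros still
      expected the deprecated tensor.type(), which is why several call sites
      above were reverted; the modern form below takes scalar_type(). The kernel
      and function names are invented.

        // Hypothetical torch CUDA-extension call site (build as a PyTorch extension).
        #include <torch/extension.h>

        template <typename scalar_t>
        __global__ void scale_kernel(scalar_t* data, int64_t n, scalar_t alpha) {
            int64_t i = blockIdx.x * (int64_t)blockDim.x + threadIdx.x;
            if (i < n) data[i] = static_cast<scalar_t>(data[i] * alpha);
        }

        void scale_(torch::Tensor x, double alpha) {
            const int64_t n = x.numel();
            if (n == 0) return;
            const int threads = 256;
            const int blocks = static_cast<int>((n + threads - 1) / threads);
            // Old call sites passed x.type() here; the migrated form reads the
            // dtype with x.scalar_type() instead.
            AT_DISPATCH_FLOATING_TYPES_AND_HALF(x.scalar_type(), "scale_", [&] {
                scale_kernel<scalar_t><<<blocks, threads>>>(
                    x.data_ptr<scalar_t>(), n, static_cast<scalar_t>(alpha));
            });
        }

        PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) {
            m.def("scale_", &scale_, "in-place scale (sketch)");
        }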
  6. 15 Mar, 2019 1 commit
  7. 12 Mar, 2019 1 commit
  8. 03 Mar, 2019 1 commit
  9. 01 Feb, 2019 1 commit
  10. 18 Jan, 2019 1 commit
  11. 15 Jan, 2019 1 commit
    • [sync BN nhwc] · 443fa76e
      Jie authored
      Added a kernel to support sync BN for channels-last (NHWC) tensors (see the sketch after this entry).
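      A standalone sketch, not the apex kernel, of why a dedicated channels-last
      path helps: in NHWC the channel index varies fastest, so mapping adjacent
      threads to adjacent channels keeps the per-channel accumulation reads
      coalesced. Shapes and names are arbitrary.

        // Hypothetical per-channel sum over an NHWC (channels-last) tensor.
        #include <cstdio>
        #include <vector>
        #include <cuda_runtime.h>

        __global__ void nhwc_channel_sum(const float* x, float* sum,
                                         int n, int h, int w, int c) {
            int ch = blockIdx.x * blockDim.x + threadIdx.x;  // one thread per channel
            if (ch >= c) return;
            float acc = 0.f;
            // NHWC: the channel stride is 1, so at each step adjacent threads
            // read adjacent addresses (coalesced).
            for (int i = 0; i < n * h * w; ++i)
                acc += x[i * c + ch];
            sum[ch] = acc;
        }

        int main() {
            const int N = 2, H = 4, W = 4, C = 8;
            std::vector<float> host(N * H * W * C);
            for (int i = 0; i < N * H * W * C; ++i)
                host[i] = float(i % C);  // in NHWC, i % C is the channel index

            float *d_x, *d_sum;
            cudaMalloc(&d_x, host.size() * sizeof(float));
            cudaMalloc(&d_sum, C * sizeof(float));
            cudaMemcpy(d_x, host.data(), host.size() * sizeof(float), cudaMemcpyHostToDevice);

            nhwc_channel_sum<<<1, 32>>>(d_x, d_sum, N, H, W, C);

            float result[C];
            cudaMemcpy(result, d_sum, C * sizeof(float), cudaMemcpyDeviceToHost);
            for (int ch = 0; ch < C; ++ch)
                printf("channel %d sum %.0f (expect %d)\n", ch, result[ch], ch * N * H * W);

            cudaFree(d_x);
            cudaFree(d_sum);
            return 0;
        }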
  12. 06 Nov, 2018 1 commit
    • [syncBN] · ee67e56a
      Jie authored
      Adjusted the kernel config for better perf.
      Removed divergence in the Welford warp reduction (see the sketch after this entry).
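      A minimal sketch, not the apex kernel, of a divergence-free Welford warp
      reduction: all 32 lanes execute the same __shfl_down_sync sequence under a
      full mask, and lanes that hold no data carry count == 0, which the
      count-weighted merge treats as a no-op, so there is neither a divergent
      shuffle nor a NaN from empty lanes. A single warp is assumed for clarity.

        // Hypothetical divergence-free Welford reduction within one warp.
        #include <cstdio>
        #include <cuda_runtime.h>

        __device__ void welford_merge(float& mean, float& m2, float& count,
                                      float mean_b, float m2_b, float count_b) {
            float total = count + count_b;
            if (total > 0.f) {  // lanes with count == 0 contribute nothing
                float delta = mean_b - mean;
                mean += delta * (count_b / total);
                m2   += m2_b + delta * delta * (count * count_b / total);
                count = total;
            }
        }

        __global__ void warp_welford(const float* x, int n, float* out_mean, float* out_var) {
            int lane = threadIdx.x;  // single warp in this sketch
            // Each lane loads at most one element; extra lanes are "dummy" (count 0).
            float mean  = (lane < n) ? x[lane] : 0.f;
            float m2    = 0.f;
            float count = (lane < n) ? 1.f : 0.f;

            // Every lane takes part in every shuffle step: no divergent branch
            // around __shfl_down_sync.
            for (int offset = 16; offset > 0; offset >>= 1) {
                float mean_b  = __shfl_down_sync(0xffffffff, mean, offset);
                float m2_b    = __shfl_down_sync(0xffffffff, m2, offset);
                float count_b = __shfl_down_sync(0xffffffff, count, offset);
                welford_merge(mean, m2, count, mean_b, m2_b, count_b);
            }
            if (lane == 0) { *out_mean = mean; *out_var = m2 / count; }
        }

        int main() {
            const int n = 20;  // fewer elements than lanes on purpose
            float host[n];
            for (int i = 0; i < n; ++i) host[i] = float(i);

            float *d_x, *d_mean, *d_var;
            cudaMalloc(&d_x, n * sizeof(float));
            cudaMalloc(&d_mean, sizeof(float));
            cudaMalloc(&d_var, sizeof(float));
            cudaMemcpy(d_x, host, n * sizeof(float), cudaMemcpyHostToDevice);

            warp_welford<<<1, 32>>>(d_x, n, d_mean, d_var);

            float mean, var;
            cudaMemcpy(&mean, d_mean, sizeof(float), cudaMemcpyDeviceToHost);
            cudaMemcpy(&var, d_var, sizeof(float), cudaMemcpyDeviceToHost);
            printf("mean %.3f  biased var %.3f\n", mean, var);  // expect 9.500 and 33.250
            cudaFree(d_x); cudaFree(d_mean); cudaFree(d_var);
            return 0;
        }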
  13. 23 Oct, 2018 1 commit
    • [syncBN] (#48) · 81eef1ef
      jjsjann123 authored
      * [syncBN]
        added syncBN to native pure-Python apex
        added fused CUDA kernels for sync BN, using Welford for mean/var;
          optional installation via 'python setup.py install --cuda_ext'
        added a unit test with a side-by-side comparison between apex sync BN and
          PyTorch BN. Note that the PyTorch BN output will be slightly off because
          of numerical issues in its mean/var computation.
      
      * [syncBN PR]
        added fp16 support
        addressed review comments on:
          1. updating last pow 2
          2. checking for import errors when importing the syncBN kernel
      
      * [syncBN PR]
        added a convert function to insert SyncBatchNorm
        refactored some kernel code
      
      * fixing type issues (fp16/fp32/fp64)
      added Kahan summation (see the sketch after this entry)
      edited the unit test to use PyTorch primitive ops in double precision; passing reasonable tests now
      
      * updating tensor creation calls
      
      * fixing the all_reduce contiguous tensor
      
      * transposed all reduce results
      
      * [syncBN]
      support fp16 input & fp32 layer for apex fp16
      partially fixing launch configs
      enabling imagenet example to run with --sync_bn
      
      * [syncBN PR]
      Documentation added
      
      * adjusting README
      
      * adjusting again
      
      * added some doc to imagenet example
      
      * [syncBN]
        warp-level reduction
        bug fix: updated the warp reduction logic; check for dummy elements to avoid NaN.
        improved launch config for better reduction kernels. A further improvement
        would be to increase the grid size.
      
      * [syncBN]
        fixed undefined behavior in __shfl_down_sync caused by divergent threads in the
        warp reduction.
        changed at::native::empty to at::empty (upstream comments)
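      A standalone sketch of the Kahan compensated summation mentioned above,
      not the apex kernel (which folds compensation into its reductions): a
      running correction term re-injects the low-order bits that plain float
      accumulation drops, which is what keeps long reductions over many small
      values accurate. A single thread is used for clarity.

        // Hypothetical Kahan (compensated) summation on the device.
        #include <cstdio>
        #include <cuda_runtime.h>

        __global__ void kahan_sum(const float* x, int n, float* out) {
            // One thread for clarity; a real kernel combines this with a
            // parallel reduction.
            float sum = 0.f, comp = 0.f;
            for (int i = 0; i < n; ++i) {
                float y = x[i] - comp;  // re-inject the bits lost last step
                float t = sum + y;      // big + small: low-order bits of y are dropped
                comp = (t - sum) - y;   // recover exactly what was dropped
                sum = t;
            }
            *out = sum;
        }

        int main() {
            const int n = 1 << 20;
            float* host = new float[n];
            for (int i = 0; i < n; ++i) host[i] = 1e-4f;  // many small values

            float *d_x, *d_out;
            cudaMalloc(&d_x, n * sizeof(float));
            cudaMalloc(&d_out, sizeof(float));
            cudaMemcpy(d_x, host, n * sizeof(float), cudaMemcpyHostToDevice);

            kahan_sum<<<1, 1>>>(d_x, n, d_out);

            float result;
            cudaMemcpy(&result, d_out, sizeof(float), cudaMemcpyDeviceToHost);
            printf("kahan sum %.6f (exact %.6f)\n", result, n * 1e-4);

            cudaFree(d_x);
            cudaFree(d_out);
            delete[] host;
            return 0;
        }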