1. 31 May, 2019 1 commit
  2. 28 May, 2019 1 commit
  3. 24 May, 2019 1 commit
  4. 23 May, 2019 1 commit
  5. 22 May, 2019 3 commits
  6. 21 May, 2019 1 commit
  7. 17 May, 2019 2 commits
    • [syncbn update] (#287) · a5289067
      jjsjann123 authored
      Update the input size check to fix GitHub issue #262.
      
      Update the SyncBatchNorm count check so that size-1 inputs run correctly with
      cross-GPU synchronization.
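The fix above concerns combining per-GPU statistics when a rank contributes only a single element. A minimal pure-Python sketch of that idea (the parallel Welford/Chan merge of per-rank mean, variance, and count; this is an illustration, not apex's actual CUDA implementation):

```python
# Merge per-GPU batch-norm statistics (mean, biased variance, element count).
# A rank with a size-1 input contributes var == 0.0 and count == 1, which the
# merge handles without any special-casing.

def merge_stats(stats):
    """Combine (mean, var, count) triples from several ranks.

    `var` is the biased (population) variance of that rank's elements.
    """
    mean, var, count = stats[0]
    m2 = var * count                      # sum of squared deviations
    for m_b, var_b, n_b in stats[1:]:
        n_a = count
        delta = m_b - mean
        count = n_a + n_b
        mean = mean + delta * n_b / count
        m2 = m2 + var_b * n_b + delta * delta * n_a * n_b / count
    return mean, m2 / count, count

# Four "GPUs", each seeing a single element (size-1 input per rank):
merged = merge_stats([(1.0, 0.0, 1), (2.0, 0.0, 1), (3.0, 0.0, 1), (4.0, 0.0, 1)])
# merged is the mean, population variance, and count of [1, 2, 3, 4]
```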
    • [SyncBatchNorm update] (#285) · ffbb52ba
      jjsjann123 authored
      Resolves issue #254.
      
      Added input casting to the pure Python implementation; this supports mismatched
      input and layer dtypes.
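The idea behind the casting fix can be sketched in plain Python: before computing, cast the input to the layer's parameter dtype instead of failing on a mismatch. The names below (`SimpleTensor`, `forward`) are illustrative stand-ins, not apex's actual API, and the "dtype" here is just a tag on a toy tensor class:

```python
# Illustrative sketch: cast the input to match the layer's dtype up front,
# so e.g. a float16 input can flow through float32 parameters.

class SimpleTensor:
    """Toy tensor: a list of values plus a dtype tag (hypothetical)."""
    def __init__(self, data, dtype):
        self.data = list(data)
        self.dtype = dtype            # e.g. "float16" or "float32"

    def to(self, dtype):
        # A real framework would convert storage; here only the tag changes.
        return SimpleTensor(self.data, dtype) if dtype != self.dtype else self

def forward(input_t, weight_t):
    # The fix: cast the input to the layer's dtype rather than erroring out.
    if input_t.dtype != weight_t.dtype:
        input_t = input_t.to(weight_t.dtype)
    return [x * w for x, w in zip(input_t.data, weight_t.data)]
```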
  8. 16 May, 2019 1 commit
  9. 15 May, 2019 3 commits
  10. 13 May, 2019 2 commits
  11. 09 May, 2019 1 commit
  12. 30 Apr, 2019 5 commits
  13. 29 Apr, 2019 2 commits
  14. 26 Apr, 2019 1 commit
    • Replace type().ScalarType() with scalar_type() (#272) · 855808f3
      ptrblck authored
      * change .type().ScalarType() to .scalar_type() + at::ScalarType::X to at::kX
      
      * revert scalar_type() to type() for AT_DISPATCH_FLOATING_TYPES_AND_HALF
      
      * revert scalar_type() to type() in AT_DISPATCH_FLOATING_TYPES
      
      * revert scalar_type() to type() for AT_DISPATCH_FLOATING_TYPES_AND_HALF in welford.cu
      
      * revert scalar_type() to type() in layer_norm_cuda_kernel.cu
      
      * revert at::kType to at::ScalarType::Type
      
      * use DISPATCH_FLOAT_AND_HALF to get rid of warnings
      
      * add dispatch mechanisms for double+float and double+float+half
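The dispatch mechanisms in the last bullet select a kernel specialization from a runtime scalar type, as the `AT_DISPATCH_*` C++ macros do. A conceptual Python analogy (the names and the trivial "kernels" below are illustrative, not the actual macros):

```python
# Conceptual analogy of a DISPATCH_FLOAT_AND_HALF-style mechanism: look up a
# specialized kernel by the runtime dtype, raising on unsupported types.

def make_dispatcher(kernels):
    """kernels: dict mapping dtype name -> specialized function."""
    def dispatch(dtype, *args):
        try:
            kernel = kernels[dtype]
        except KeyError:
            raise TypeError(f"unsupported dtype: {dtype}") from None
        return kernel(*args)
    return dispatch

# A "double + float + half" dispatcher, mirroring the commit's added
# double+float+half mechanism (real half kernels would differ, of course):
scale = make_dispatcher({
    "double": lambda xs: [x * 2.0 for x in xs],
    "float":  lambda xs: [x * 2.0 for x in xs],
    "half":   lambda xs: [x * 2.0 for x in xs],
})
```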
  15. 23 Apr, 2019 1 commit
  16. 18 Apr, 2019 2 commits
  17. 16 Apr, 2019 1 commit
  18. 11 Apr, 2019 1 commit
    • prelu belongs in FP16_CASTS (#257) · 4dc711bc
      henrymai authored
      The main use of these functions (e.g. `torch.{conv*, prelu}`) is via their
      `torch.nn` wrapping layers.
      
      The `torch.nn` layers are what contain the weights and call into these lower level
      functions with the weights as a parameter in their `forward()` method.
      
      The `torch.conv*` functions are already in the `FP16_CASTS` list due to amp's philosophy of
      casting the parameters rather than the model/layer weights.
      
      Conceptually `torch.prelu` is the same as the `torch.conv*` case, where its weight parameter
      is passed in from its wrapper layer `torch.nn.PReLU`.
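The "cast the parameters rather than the weights" philosophy described above can be sketched as a wrapper around the functional call: since the `torch.nn` layer passes its weight in as an argument, intercepting the function and casting its floating-point arguments covers the weight too. Everything below is a hypothetical stdlib-only illustration (rounding stands in for a float32-to-float16 cast), not amp's actual patching code:

```python
import functools

def fp16_cast(fn, cast=lambda x: round(x, 3)):
    """Wrap fn so floating-point arguments are cast before the call.

    `cast` is a stand-in for a float32 -> float16 conversion; rounding to
    three decimals is just a visible, dependency-free substitute.
    """
    @functools.wraps(fn)
    def wrapper(*args):
        cast_args = [cast(a) if isinstance(a, float) else a for a in args]
        return fn(*cast_args)
    return wrapper

def prelu(x, weight):
    # Functional form: the weight arrives as an argument, like torch.prelu
    # called from torch.nn.PReLU.forward().
    return x if x >= 0 else weight * x
```

Because the cast happens at the call site, the wrapper layer's stored weight is untouched; only the values flowing into the low-level function change precision.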
  19. 10 Apr, 2019 5 commits
  20. 09 Apr, 2019 1 commit
  21. 08 Apr, 2019 1 commit
  22. 05 Apr, 2019 3 commits