1. 02 Nov, 2022 1 commit
  2. 28 Oct, 2022 1 commit
  3. 27 Oct, 2022 2 commits
  4. 26 Oct, 2022 1 commit
  5. 24 Oct, 2022 1 commit
  6. 19 Oct, 2022 2 commits
  7. 18 Oct, 2022 1 commit
  8. 13 Oct, 2022 1 commit
  9. 04 Oct, 2022 2 commits
  10. 03 Oct, 2022 1 commit
    • Add output_alias and runs_on_offload_target flags for the custom ops (#1309) · c9ffb38d
      Umang Yadav authored
      Adds two methods for the custom_ops virtual class.
      
      bool runs_on_offload_target(): if the custom op runs directly on the GPU, this should return true; in that case the custom op expects its parameters to reside in GPU memory and writes its output to GPU memory. If it returns false, the custom op expects its parameters to reside on the host and writes the result back into host memory.
      
      output_alias: indicates whether the output of the custom op aliases an input buffer, i.e. interprets the same input buffer with a different shape and strides.
      
      Also updates as_vector() in the C++ API to handle non-standard shapes, which required exposing the element-index-to-space-index conversion method on the shape class.
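A minimal sketch of a custom op using both methods, based on the experimental C++ custom-op API; everything beyond the two methods described above (class name, exact signatures, the compute body) should be treated as illustrative:

```cpp
#include <migraphx/migraphx.hpp> // experimental C++ API header

struct noop_custom_op : migraphx::experimental_custom_op_base
{
    virtual std::string name() const override { return "noop_custom_op"; }

    // True: the op runs directly on the GPU, so its parameters arrive in
    // device memory and its output stays in device memory; no host<->device
    // staging copies are inserted around it.
    virtual bool runs_on_offload_target() const override { return true; }

    // The output aliases input 0 (same buffer, possibly viewed with a
    // different shape and strides), so no new output allocation is needed.
    virtual std::vector<size_t> output_alias(migraphx::shapes) const override
    {
        return {0};
    }

    virtual migraphx::shape compute_shape(migraphx::shapes inputs) const override
    {
        return inputs.front();
    }

    virtual migraphx::argument
    compute(migraphx::context, migraphx::shape, migraphx::arguments inputs) const override
    {
        return inputs[0]; // result is the aliased input buffer
    }
};
```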
  11. 29 Sep, 2022 1 commit
  12. 28 Sep, 2022 1 commit
    • Add compute_fp32 flag for quant_gemm tests (#1360) · 70e63960
      Umang Yadav authored
      test_gpu_pack_int8_args fails on gfx908 machines because it doesn't set the compute_fp32 flag correctly. This PR fixes the test so that it checks the device name and rocBLAS version and sets the flag accordingly.
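A sketch of how such a gate might look; the commit message does not spell out the exact architecture cutoff or the rocBLAS version check, so the final condition below is a labeled placeholder:

```cpp
#include <hip/hip_runtime.h>
#include <string>

bool get_compute_fp32_flag()
{
    int device = 0;
    hipDeviceProp_t props{};
    if(hipGetDevice(&device) != hipSuccess ||
       hipGetDeviceProperties(&props, device) != hipSuccess)
        return false;
    std::string arch{props.gcnArchName};   // e.g. "gfx908:sramecc+:xnack-"
    arch = arch.substr(0, arch.find(':')); // keep only the gfx number
    // Placeholder condition: the real test combines the device name with a
    // rocBLAS version check, which the commit message does not spell out.
    return arch >= "gfx908";
}
```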
  13. 27 Sep, 2022 1 commit
  14. 26 Sep, 2022 1 commit
  15. 23 Sep, 2022 1 commit
  16. 21 Sep, 2022 1 commit
  17. 19 Sep, 2022 1 commit
    • Improve layernorm and reductions performance (#1348) · 97a1ed2d
      Paul Fultz II authored
      Compute the mean and variance in the same reduction.
      Set block sizes to numbers divisible by 32 instead of powers of 2.
      The global size is also set exactly instead of just being divisible by the block size; more exact matching of global/local sizes helps get rid of branching and loops.
      Reduce vectors first before doing dpp_reduce.
      Explicitly vectorize array operators, since the compiler doesn't always vectorize them.
      Still uses the old for loop when computing at compile time, since neither reinterpret_cast nor all of the vector types are supported there.
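To illustrate the headline change, here is a minimal sketch of computing the mean and variance in a single reduction pass by accumulating the sum and the sum of squares together (plain scalar C++ with made-up names; the actual kernel does this per reduction block on the GPU):

```cpp
#include <cstddef>

struct mean_var
{
    float mean;
    float variance;
};

// One traversal accumulates both the sum and the sum of squares, so the
// mean and variance come out of a single reduction instead of two.
mean_var reduce_mean_var(const float* x, std::size_t n)
{
    float sum = 0.0f, sum_sq = 0.0f;
    for(std::size_t i = 0; i < n; i++)
    {
        sum    += x[i];
        sum_sq += x[i] * x[i];
    }
    float mean = sum / n;
    return {mean, sum_sq / n - mean * mean}; // Var[x] = E[x^2] - E[x]^2
}
```

LayerNorm can then normalize with (x - mean) * rsqrt(variance + eps) without a second pass over the input.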
  18. 16 Sep, 2022 1 commit
  19. 15 Sep, 2022 1 commit
  20. 14 Sep, 2022 1 commit
  21. 13 Sep, 2022 1 commit
    • Use rocblas_gemm_ex for batched gemms with broadcasted B (#1354) · a10a8ef1
      turneram authored
      Improves performance for 4 of the 6 GEMMs used by Hugging Face BERT models with batch_size > 1 by using a non-batched rocBLAS call for GEMMs where the B input has a broadcast batch dimension.
      The four verify tests added reflect the actual configurations used by bert-base-cased, with varied batch sizes.
      
      Also adds a matcher to simplify_reshapes to move multibroadcasts after concats.
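The reason one non-batched call suffices: when B's batch dimension is broadcast, C[b] = A[b] * B for every batch b, so folding A's batch dimension into M yields a single (batch*M, K) x (K, N) GEMM. A naive C++ sketch of the equivalence (the names and reference loop are illustrative, not the rocBLAS call itself):

```cpp
#include <cstddef>
#include <vector>

// Reference loop: one flat (rows x k) * (k x n) product. With A's batch
// folded into the row dimension (rows = batch * m), this computes the same
// values as a batched GEMM whose B is broadcast across the batch, but in a
// single rocBLAS launch instead of one per batch.
std::vector<float> flat_gemm(const std::vector<float>& a, // (batch*m, k), row-major
                             const std::vector<float>& b, // (k, n), shared by all batches
                             std::size_t rows,
                             std::size_t k,
                             std::size_t n)
{
    std::vector<float> c(rows * n, 0.0f);
    for(std::size_t i = 0; i < rows; i++)
        for(std::size_t j = 0; j < n; j++)
            for(std::size_t p = 0; p < k; p++)
                c[i * n + j] += a[i * k + p] * b[p * n + j];
    return c;
}
```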
  22. 08 Sep, 2022 1 commit
  23. 07 Sep, 2022 1 commit
  24. 06 Sep, 2022 1 commit
  25. 31 Aug, 2022 1 commit
  26. 27 Aug, 2022 2 commits
  27. 17 Aug, 2022 1 commit
  28. 16 Aug, 2022 1 commit
  29. 12 Aug, 2022 1 commit
  30. 02 Aug, 2022 1 commit
  31. 29 Jul, 2022 1 commit
    • Avoid registering host buffer ptr multiple times during hip copies (#1245) · 7596f3f1
      Umang Yadav authored
      Currently, when copying a host buffer to the device, MIGraphX first registers/maps the host buffer pointer into the device's address space.
      
      If the host buffer was allocated by hipHostMalloc, it is already implicitly registered in the device's address space and does not need to be registered again. This PR adds a check for that case.
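A minimal sketch of such a check, assuming hipPointerGetAttributes fails for host pointers the runtime does not already know about (behavior varies across ROCm versions, so treat this as illustrative rather than the PR's actual code):

```cpp
#include <hip/hip_runtime.h>
#include <cstddef>

// Register a host pointer only if the runtime does not already know it.
// On typical ROCm versions, hipPointerGetAttributes fails for plain
// (unregistered) host pointers and succeeds for hipHostMalloc'd ones.
void register_host_ptr_if_needed(void* host_ptr, std::size_t bytes)
{
    hipPointerAttribute_t attrs{};
    if(hipPointerGetAttributes(&attrs, host_ptr) == hipSuccess)
        return;              // already pinned/registered, skip
    (void)hipGetLastError(); // clear the error left by the failed query
    hipHostRegister(host_ptr, bytes, hipHostRegisterDefault);
}
```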
  32. 25 Jul, 2022 1 commit
    • Add onnx mod operator (#1302) · 77e80b8e
      Ted Themistokleous authored
      * Add in changes for onnx Mod operator
      
      Initial implementation of the Mod operator, with test cases for integer and floating-point types.
      
      Need to use fmod from the standard library for floating-point types. Thankfully, half_float::half is specced to use the existing std::fmod() call, judging by the half.hpp implementation.
      
      fmod_flag mirrors the ONNX fmod attribute. Right now, using a floating-point type without setting it to true on the user side will result in an exception.
      
      Ref ticket #1283 
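A sketch of the two behaviors selected by the fmod attribute, per the ONNX spec (free-standing functions for illustration; not the operator's actual implementation):

```cpp
#include <cmath>

// fmod = 0 (integer mod): the result takes the sign of the divisor,
// like Python's % operator.
int mod_integer(int x, int y)
{
    int r = x % y;
    if(r != 0 && ((r < 0) != (y < 0)))
        r += y; // shift the remainder onto the divisor's sign
    return r;
}

// fmod = 1: C-style fmod, result takes the sign of the dividend. This is
// the path floating-point types must take (half_float::half routes to
// std::fmod as noted above).
double mod_fmod(double x, double y) { return std::fmod(x, y); }
```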
  33. 19 Jul, 2022 1 commit
    • Fix op includes (#1308) · 39b307b2
      Charlie Lin authored
      Changes to operator includes:
      
      Removed some includes that were not used.
      Included argument.hpp where clang-tidy wanted it.
  34. 12 Jul, 2022 1 commit
  35. 11 Jul, 2022 2 commits