1. 03 Oct, 2022 1 commit
    • Add output_alias and runs_on_offload_target flags for the custom ops (#1309) · c9ffb38d
      Umang Yadav authored
      Adds two methods to the custom_ops virtual class.
      
      bool runs_on_offload_target(): if the custom op runs directly on the GPU, this should return true; in that case, the custom op expects its parameters to reside in GPU memory and writes its output to GPU memory. If it returns false, the custom op expects its parameters to reside on the host and writes the result back into host memory.
      
      output_alias: indicates whether the output of the custom op aliases an input buffer, i.e. reinterprets the same input buffer with a different shape and strides.
      
      Updates as_vector() in the C++ API to handle non-standard shapes. This required exposing the element_index to space_index conversion method on the shape class.
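      The output_alias semantics described above (the same buffer reinterpreted with a different shape and strides) can be illustrated outside MIGraphX with a NumPy strided view; this is a conceptual sketch, not the custom-op API itself:

```python
import numpy as np

# A flat buffer of 6 float32 elements (4 bytes each).
buf = np.arange(6, dtype=np.float32)

# "Alias" the same memory with a different shape and strides, the way an
# output_alias custom op would reinterpret its input buffer without copying.
view = np.lib.stride_tricks.as_strided(buf, shape=(2, 3), strides=(12, 4))

# No copy was made: writing through the view mutates the original buffer.
view[0, 0] = 42.0
print(buf[0])                      # 42.0
print(np.shares_memory(view, buf)) # True
```

      Because the alias shares storage with its input, a runtime can skip allocating a separate output buffer for such ops.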
  2. 29 Sep, 2022 2 commits
  3. 28 Sep, 2022 1 commit
    • Add compute_fp32 flag for quant_gemm tests (#1360) · 70e63960
      Umang Yadav authored
      test_gpu_pack_int8_args fails on gfx908 machines because it doesn't set the compute_fp32 flag correctly. This PR fixes the test so that it checks the device name and rocBLAS version and sets the flag accordingly.
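      The gating this fix describes might look like the following sketch; the device list and the rocBLAS version threshold here are hypothetical stand-ins, not the actual values the MIGraphX test checks:

```python
# Hypothetical sketch of gating compute_fp32 by device and rocBLAS version.
def needs_compute_fp32(device_name: str, rocblas_version: tuple) -> bool:
    """Assume (for illustration only) that certain CDNA devices accumulate
    int8 GEMMs in fp32 once rocBLAS is new enough; the real test encodes
    the project's actual device names and version cutoffs."""
    cdna_devices = {"gfx908", "gfx90a"}      # assumed device list
    min_version = (2, 38, 0)                 # assumed version cutoff
    return device_name in cdna_devices and rocblas_version >= min_version

print(needs_compute_fp32("gfx908", (2, 40, 0)))  # True
print(needs_compute_fp32("gfx906", (2, 40, 0)))  # False
```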
  4. 27 Sep, 2022 1 commit
  5. 26 Sep, 2022 3 commits
  6. 23 Sep, 2022 1 commit
  7. 21 Sep, 2022 2 commits
  8. 19 Sep, 2022 1 commit
    • Improve layernorm and reductions performance (#1348) · 97a1ed2d
      Paul Fultz II authored
      Compute mean and variance in the same reduction
      Set block sizes to numbers divisible by 32 instead of powers of 2
      Set the global size exactly instead of just making it divisible by the block size
      More exact matching of global/local sizes can help get rid of branching/loops
      Reduce vectors first before doing dpp_reduce
      Explicitly vectorize array operators, since the compiler doesn't always vectorize them
      Still uses the old for loop when computing at compile time, since neither reinterpret_cast nor all of the vector types are supported there
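      Computing the mean and variance in the same reduction, as the first point describes, typically means accumulating both the sum and the sum of squares in a single pass; a minimal sketch of the idea (not the GPU kernel itself):

```python
def mean_variance_one_pass(xs):
    # Accumulate sum and sum of squares in a single traversal, the way a
    # fused layernorm reduction visits each element only once.
    n = len(xs)
    s = 0.0
    sq = 0.0
    for x in xs:
        s += x
        sq += x * x
    mean = s / n
    # variance = E[x^2] - (E[x])^2; a real kernel might prefer Welford's
    # update for better numerical stability on large inputs.
    variance = sq / n - mean * mean
    return mean, variance

print(mean_variance_one_pass([1.0, 2.0, 3.0, 4.0]))  # (2.5, 1.25)
```

      Fusing the two statistics halves the number of passes over the data, which matters when the reduction is memory-bound.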
  9. 16 Sep, 2022 2 commits
  10. 15 Sep, 2022 1 commit
  11. 14 Sep, 2022 3 commits
  12. 13 Sep, 2022 1 commit
    • Use rocblas_gemm_ex for batched gemms with broadcasted B (#1354) · a10a8ef1
      turneram authored
      Improves performance for 4 of the 6 GEMMs used by Hugging Face BERT models with batch_size > 1 by using a non-batched rocBLAS call for GEMMs whose B input has a broadcasted batch dimension.
      The four verify tests added reflect the actual configurations used by bert-base-cased, with varied batch sizes.
      
      Also adds a matcher to simplify_reshapes that moves multibroadcasts after concats.
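      The rewrite relies on the fact that when B is broadcast across the batch dimension, a batched GEMM collapses into one large GEMM over the flattened A. A NumPy sketch of the equivalence (illustrating the math, not the rocBLAS call):

```python
import numpy as np

rng = np.random.default_rng(0)
b, m, k, n = 4, 3, 5, 2
A = rng.standard_normal((b, m, k))
B = rng.standard_normal((k, n))   # shared (broadcast) across the batch

batched = np.matmul(A, B)                           # batched GEMM, broadcast B
flat = (A.reshape(b * m, k) @ B).reshape(b, m, n)   # one non-batched GEMM

print(np.allclose(batched, flat))  # True
```

      A single large (b*m, k) x (k, n) GEMM generally utilizes the GPU better than b small (m, k) x (k, n) GEMMs, which is where the speedup comes from.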
  13. 08 Sep, 2022 2 commits
  14. 07 Sep, 2022 1 commit
  15. 06 Sep, 2022 1 commit
  16. 31 Aug, 2022 1 commit
  17. 29 Aug, 2022 1 commit
  18. 27 Aug, 2022 2 commits
  19. 26 Aug, 2022 1 commit
  20. 24 Aug, 2022 1 commit
  21. 23 Aug, 2022 1 commit
    • Dynamic ref NMS (#1288) · fa3c21fa
      Charlie Lin authored
      Has the NMS op output a dynamic shape (ONNX spec behavior)
      Allows a dynamic input shape to the NMS op
  22. 21 Aug, 2022 1 commit
    • Update is_supported (#1334) · 79e15ca9
      varunsh authored
      * Update is_supported
      * Return object from is_supported
      * Return by reference in iterator
  23. 19 Aug, 2022 2 commits
  24. 18 Aug, 2022 1 commit
    • pybind updates for torch_migraphx library (#1323) · 8045f7c8
      shivadbhavsar authored
      Adds the function argument_from_pointer to allow directly passing a migraphx.shape object and a memory address.
      Exposes the is_compiled() method from migraphx::program.
      Exposes the enum types under migraphx::op.
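      argument_from_pointer lets the bindings wrap memory that another framework (e.g. a PyTorch tensor) already owns, without copying. The zero-copy idea can be illustrated with plain ctypes and NumPy; this sketch is not the migraphx API itself:

```python
import ctypes
import numpy as np

# Memory owned by some other framework, stood in for here by a NumPy array.
owner = np.arange(4, dtype=np.float32)
address = owner.ctypes.data  # a raw pointer, as torch exposes via data_ptr()

# Rebuild an array over the same memory from (pointer, length, dtype) alone,
# which is essentially what argument_from_pointer does given a migraphx.shape.
buf = (ctypes.c_float * 4).from_address(address)
wrapped = np.frombuffer(buf, dtype=np.float32)

wrapped[0] = 7.0   # writes through to the owner's memory, no copy involved
print(owner[0])    # 7.0
```

      Avoiding the copy is what makes handing torch tensors to MIGraphX cheap in torch_migraphx.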
  25. 17 Aug, 2022 2 commits
  26. 16 Aug, 2022 2 commits
  27. 12 Aug, 2022 1 commit
  28. 08 Aug, 2022 1 commit
    • Imply type of literal returned based on input protobuf for zero elem… (#1326) · bb0e04ce
      Ted Themistokleous authored
      * Imply type of literal returned based on input protobuf for zero-element constant values.
      
      This saves us from the default behavior, where ONNX parsing assumes every zero-element value is float. This way we still grab the relevant type information from the protobuf and won't fail our data-type checks for if/then/else blocks from ONNX.
      
      * Revert "Imply type of literal returned based on input protobuf for zero element constant values."
      
      This reverts commit 390bb853.
      
      * Add test case to parse an empty constant int64 protobuf
      
      I think the previous test case was masking an issue where we default to float but need to actually read in int64 instead of int32.
      
      * fixup! Add test case to parse an empty constant int64 protobuf
      
      * Add test for a non-empty int64 scalar
      
      Adds one item to the np array used for the constant we're parsing.
      
      * Draft partial fix
      
      * Fix test failures from the previous change to read protobuf data types correctly for empty constants.
      
      Reading in the correct types, instead of assuming empty constants default to float, broke some assumptions the code made about empty literals.
      
      * Fix formatting and naming
      
      * Fix naming with var in constant_one_val_int64_test
      Co-authored-by: charlie <charlie.lin@amd.com>
      Co-authored-by: kahmed10 <15948690+kahmed10@users.noreply.github.com>
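      The core of the fix is keeping the declared element type even when a constant carries zero elements, instead of defaulting to float. A toy sketch of that behavior (parse_constant is made up for illustration; it is not the MIGraphX ONNX parser):

```python
import numpy as np

def parse_constant(declared_dtype, raw_values):
    """Toy stand-in for ONNX constant parsing: honor the dtype declared in
    the protobuf even for zero-element tensors, rather than falling back
    to a float default when the data payload is empty."""
    return np.array(raw_values, dtype=declared_dtype)

empty = parse_constant(np.int64, [])
print(empty.dtype)   # int64 (not the old float default)
print(empty.size)    # 0

scalar = parse_constant(np.int64, [7])
print(scalar.dtype)  # int64
```

      Preserving the declared dtype is what keeps downstream type checks (e.g. for if/then/else subgraphs) consistent.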