  1. 17 Jun, 2023 1 commit
    • Update CK commit hash and add gfx940 to supported archs (#1842) · b8898d7e
      turneram authored
      * Add initial ck_gemm code
      
      * Format
      
      * Add additional src files
      
      * Format
      
      * Add include
      
      * Simplify fuse_ck
      
      * Format
      
      * Rename var
      
      * Enable pass
      
      * Update ck version
      
      * Fix include
      
      * Add group stride
      
      * Disable warnings for ck headers
      
      * Format
      
      * Add unpack array
      
      * Add interface to enable tuning
      
      * Format
      
      * Update compile_ops to handle tuning config
      
      * Format
      
      * Add some comments
      
      * Move time_op to migraphx_gpu
      
      * Add benchmarking
      
      * Refactor
      
      * Format
      
      * Add lift class macro
      
      * Use device name
      
      * Format
      
      * Generate configs
      
      * Format
      
      * Pass tuning parameter
      
      * Move data type to is_ck_gemm matcher
      
      * Format
      
      * Add problem_cache to avoid retuning same configs
      
      * Format
      
      * Format
      
      * Mark the problems
      
      * Format
      
      * Use is_null
      
      * Format
      
      * Resize vector
      
      * Only tune with exhaustive tuning
      
      * Format
      
      * Use assert
      
      * Format
      
      * Tidy fixes
      
      * More tidy fixes
      
      * Format
      
      * Add license to missing files
      
      * Format
      
      * Use transform
      
      * Format
      
      * Fix tidy
      
      * Format
      
      * Fix cppcheck issues
      
      * Format
      
      * Add static_assert
      
      * Add ops header
      
      * Add assertion in batcher
      
      * Format
      
      * Improve the batch fold check
      
      * Format
      
      * Add where op workaround for CK
      
      * Skip if any input is not a supported ck type
      
      * Format
      
      * Check batch is standard
      
      * Format
      
      * Remove redundant static keyword
      
      * Update commit hash
      
      * Fix error when running without --exhaustive-tune
      
      * Formatting
      
      * Formatting
      
      * Remove fuse_ck_gemm_softmax_gemm
      
      * Update ck hash
      
      * Correct spelling mistake
      
      * Remove commented out logic from fuse_ck
      
      * Remove unused include and add comment
      
      * Formatting
      
      * Remove redundant get_shape and remove ck_gemm from names
      
      * Formatting
      
      * Allow for mixed types with int8 gemms
      
      * Formatting
      
      * Add back find_package from merge
      
      * Update CK commit hash and add gfx940 to fuse_ops supported archs
      
      * Formatting
      
      * Update CK hash
  2. 09 Jun, 2023 1 commit
  3. 08 Jun, 2023 1 commit
  4. 24 May, 2023 1 commit
  5. 06 Apr, 2023 1 commit
  6. 29 Mar, 2023 1 commit
  7. 27 Mar, 2023 1 commit
  8. 10 Mar, 2023 1 commit
  9. 16 Feb, 2023 1 commit
  10. 17 Jan, 2023 1 commit
  11. 09 Jan, 2023 1 commit
  12. 06 Dec, 2022 1 commit
  13. 02 Nov, 2022 2 commits
  14. 27 Oct, 2022 1 commit
    • Add JIT pad (#1411) · 0d841ded
      kahmed10 authored
      Updated the GPU pad operator to use a JIT-compiled version.
      Added range functions for JIT kernels; a sketch follows.
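      A minimal sketch of what such a range helper could look like (the name and layout here are hypothetical, not MIGraphX's actual kernel API): it lets JIT kernel code iterate indices with a range-for loop instead of a raw counter.

      // Hypothetical device-side range helper; plain C++ so it can be compiled
      // into JIT kernels.
      template <class T>
      struct integral_range
      {
          struct iterator
          {
              T i;
              constexpr T operator*() const { return i; }
              constexpr iterator& operator++() { ++i; return *this; }
              constexpr bool operator!=(const iterator& rhs) const { return i != rhs.i; }
          };
          T n;
          constexpr iterator begin() const { return {T{0}}; }
          constexpr iterator end() const { return {n}; }
      };

      template <class T>
      constexpr integral_range<T> range(T n) { return {n}; }

      // Usage inside a kernel body: for(auto i : range(nelements)) { ... }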
  15. 19 Oct, 2022 1 commit
    • Refactor dynamic compute; Dynamic ref unary functions (#1407) · 693cb5d8
      Charlie Lin authored
      Refactor dynamic compute
      - Add a compute_output_shape object that implicitly converts to a new dyn_output or shape object
      - The dyn_output object handles computing the static output shape of an operator from the shapes of the input arguments
      - Change an operator's compute function to argument compute(const dyn_output& dyn_out, std::vector<argument> args) so
        that it uses the dyn_output object (see the sketch after this list)
      
      Dynamic ref unary functions
      - Included these changes as an example of the refactored dynamic compute in use
      - Changed the unary base class to handle dynamic shapes
      - Changed elu and leaky_relu to use the unary base class and pointwise JIT
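      A minimal self-contained sketch of the pattern above, using mock stand-ins for MIGraphX's shape/argument/dyn_output types (the member names are illustrative):

      #include <cstddef>
      #include <vector>

      struct shape { std::vector<std::size_t> lens; };
      struct argument { shape s; /* data buffer omitted */ };

      // dyn_output carries the static output shape computed from the runtime
      // input shapes, so the operator does not recompute it inside compute().
      struct dyn_output { shape computed_shape; };

      struct relu_op
      {
          argument compute(const dyn_output& dyn_out, std::vector<argument> /*args*/) const
          {
              argument result{dyn_out.computed_shape};
              // ... apply max(x, 0) elementwise from args[0] into result ...
              return result;
          }
      };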
  16. 18 Oct, 2022 1 commit
  17. 04 Oct, 2022 1 commit
  18. 29 Sep, 2022 1 commit
  19. 26 Sep, 2022 1 commit
  20. 21 Sep, 2022 1 commit
  21. 14 Sep, 2022 1 commit
  22. 08 Sep, 2022 1 commit
  23. 17 Aug, 2022 1 commit
  24. 25 Jul, 2022 1 commit
    • Add onnx mod operator (#1302) · 77e80b8e
      Ted Themistokleous authored
      * Add in changes for onnx Mod operator
      
      Initial implementation of the Mod operator, with test cases for integer and floating-point types.

      Floating-point types need fmod from the standard library. Thankfully, judging by the half.hpp implementation, half_float::half is specced to use the existing std::fmod() call.

      fmod_flag should mirror the onnx fmod attribute. Right now, using a floating-point type without setting it to true on the user side will result in an exception (see the sketch below).
      
      Ref ticket #1283 
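      An illustrative sketch of the dispatch described above (our reading of the PR, not its exact code): integral types use the % operator, while floating-point types throw unless fmod is set and otherwise fall through to std::fmod.

      #include <cmath>
      #include <stdexcept>
      #include <type_traits>

      template <class T>
      T mod_op(T x, T y, bool fmod_flag)
      {
          if constexpr(std::is_integral<T>{})
          {
              // With fmod=0, ONNX Mod follows the divisor's sign (numpy-style),
              // so adjust C++'s truncating remainder when the signs differ.
              T r = x % y;
              if(not fmod_flag and r != 0 and (r < 0) != (y < 0))
                  r += y;
              return r;
          }
          else
          {
              if(not fmod_flag)
                  throw std::runtime_error("Mod: floating-point input requires fmod=1");
              return std::fmod(x, y); // half_float::half also routes through std::fmod
          }
      }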
  25. 05 Jul, 2022 1 commit
  26. 03 Jul, 2022 1 commit
    • Add mlir fusion (#1251) · ca8a54fe
      Paul Fultz II authored
      * Add mlir c api
      
      * Formatting
      
      * Create a type attribute
      
      * Formatting
      
      * Parse module
      
      * Formatting
      
      * Add mlir dump function
      
      * Add test case
      
      * Formatting
      
      * Fix tidy issues
      
      * Update mlir version
      
      * Update to newer mlir
      
      * Format
      
      * Move mlir to the gpu and update the test
      
      * Formatting
      
      * Fix bug when appending module
      
      * Format
      
      * Remove old cmake flag
      
      * Update message
      
      * Add return
      
      * Format
      
      * Add mlir_compile
      
      * Format
      
      * Register dialect
      
      * Handle unsigned integers
      
      * Don't provide output for return instruction
      
      * Format
      
      * Add code to insert memrefs
      
      * Format
      
      * Add mlir verification
      
      * Formatting
      
      * Enable pointwise_fusion
      
      * Disable eliminate_data_type
      
      * Set kernel name
      
      * Format
      
      * Fix device name
      
      * Formatting
      
      * Fix output arg
      
      * Format
      
      * Updates
      
      * Update hash
      
      * Add fuse_mlir pass
      
      * Format
      
      * Add fuse mlir
      
      * Format
      
      * Update mlir
      
      * Sort parameter names
      
      * Format
      
      * Reenable disabled passes
      
      * Remove old mlir conv
      
      * Remove asym default padding
      
      * Add more verbose tracing
      
      * Format
      
      * Fix compilation errors
      
      * Format
      
      * Whitelist operators
      
      * Format
      
      * Add namespace
      
      * Format
      
      * Update triple
      
      * Format
      
      * Use func dialect
      
      * Format
      
      * Use func.return
      
      * Format
      
      * Upgrade mlir version
      
      * Add comment
      
      * Handle symmetrical padding
      
      * Format
      
      * Cleanup debug output
      
      * Format
      
      * List failed tests
      
      * Move mlir compile to jit pipeline
      
      * Format
      
      * Update version
      
      * Add source locations
      
      * Format
      
      * Correctly add module
      
      * Format
      
      * Update failed tests
      
      * Fix failures when mlir is disabled
      
      * Format
      
      * Update mlir version
      
      * Check type for fp32
      
      * Format
      
      * Remove failed test
      
      * Update mlir in driver
      
      * Tidy fixes
      
      * Format
      
      * Tidy fixes
      
      * Format
      
      * Fix const
      
      * Remove from requirements
      
      * Fix cmake version
      
      * Fix tidy warning
      
      * Use another ifdef
      
      * Fix tidy
      
      * Other tidy fix
      
      * Format
      
      * Update hash
      
      * Add missing license files
      
      * Format
      
      * Format
      
      * Fix function name
  27. 25 Jun, 2022 1 commit
  28. 22 Jun, 2022 1 commit
  29. 10 Jun, 2022 1 commit
  30. 24 May, 2022 1 commit
  31. 20 May, 2022 1 commit
    • Rename pointwise ops (#1145) · 4a312201
      kahmed10 authored
      Renames pointwise kernels for clarity when profiling. The new names follow the order of the ops being compiled; for example, add + relu = add_relu_kernel.
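      A minimal sketch of that naming scheme (illustrative, not the exact implementation): join the fused ops in compile order and append a kernel suffix.

      #include <iostream>
      #include <string>
      #include <vector>

      std::string pointwise_kernel_name(const std::vector<std::string>& ops)
      {
          std::string name;
          for(const auto& op : ops)
              name += op + "_";
          return name + "kernel";
      }

      int main()
      {
          // add + relu fuses into "add_relu_kernel", as in the example above.
          std::cout << pointwise_kernel_name({"add", "relu"}) << '\n';
      }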
  32. 09 May, 2022 1 commit
  33. 06 May, 2022 1 commit
  34. 29 Apr, 2022 1 commit
  35. 27 Apr, 2022 1 commit
    • Add lane reduction (#1180) · 4c72cc95
      Paul Fultz II authored
      With reductions such as {2048, 2, 1456} on axis 1, this is 23x faster than using our new block_reduce, and it's even over 100x faster than our original reduce_sum:
      
      # lane
      gpu::code_object[code_object=13736,symbol_name=kernel,global=2981888,local=1024,]: 0.0672928ms
      # block
      gpu::code_object[code_object=13800,symbol_name=kernel,global=39321600,local=64,]: 1.46072ms
      # original
      gpu::reduce_sum[axes={1}]: 6.73456ms
      There is some basic logic to pick between lane and block reduce automatically.
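      An illustrative lane-reduce kernel in HIP/CUDA-style C++ (a sketch of the idea, not MIGraphX's generated code): each thread reduces one output element serially, which wins when the reduced axis is short, since dedicating a whole block to each tiny reduction would leave most lanes idle.

      #include <cstddef>

      __global__ void lane_reduce_sum(const float* in, float* out, std::size_t n_out,
                                      std::size_t rdim, std::size_t rstride)
      {
          std::size_t i = std::size_t{blockIdx.x} * blockDim.x + threadIdx.x;
          if(i >= n_out)
              return;
          // Map output element i to the start of its reduction slice; for
          // {2048, 2, 1456} reduced on axis 1, rdim = 2 and rstride = 1456.
          std::size_t base = (i / rstride) * rdim * rstride + (i % rstride);
          float acc = 0;
          for(std::size_t j = 0; j < rdim; ++j)
              acc += in[base + j * rstride];
          out[i] = acc;
      }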
  36. 17 Apr, 2022 1 commit
    • Reduce with runtime compilation (#1150) · f9a5b81e
      Paul Fultz II authored
      There is a significant improvement on larger tensors, with half almost 50% faster:
      
      lens: [1024, 384, 768]
      gpu::code_object[code_object=13832,symbol_name=kernel,global=39321600,local=256,]: 1.16685ms
      gpu::reduce_sum[axes={2}]: 1.73126ms
      Also for non-trivial layouts this can sometimes be over 2x faster:
      
      lens: [64, 1024, 768, 4]
      gpu::code_object[code_object=13832,symbol_name=kernel,global=39321600,local=256,]: 1.1706ms
      gpu::reduce_sum[axes={1}]: 2.63375ms
      Of course, if the stride becomes larger, this speed improvement diminishes due to poor memory access patterns. A lane_reduce instead of a block_reduce is needed for that type of kernel. I plan to address that in a future PR.
      
      Finally, this also includes a MIGRAPHX_GPU_DUMP_ASM environment variable, which prints out the assembly when the kernel compiles.
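      For contrast with the lane reduce in the previous entry, here is an illustrative block-reduce kernel in HIP/CUDA-style C++ (a classic shared-memory tree, not MIGraphX's generated code): the whole block cooperates on one reduction slice, which wins when the reduced axis is long and contiguous but degrades as the stride grows.

      #include <cstddef>

      // Assumes blockDim.x == 256 (a power of two) and a contiguous reduced axis.
      __global__ void block_reduce_sum(const float* in, float* out, std::size_t rdim)
      {
          __shared__ float buf[256];
          const float* slice = in + std::size_t{blockIdx.x} * rdim; // one slice per block
          float acc = 0;
          for(std::size_t j = threadIdx.x; j < rdim; j += blockDim.x)
              acc += slice[j];
          buf[threadIdx.x] = acc;
          __syncthreads();
          for(unsigned s = blockDim.x / 2; s > 0; s /= 2)
          {
              if(threadIdx.x < s)
                  buf[threadIdx.x] += buf[threadIdx.x + s];
              __syncthreads();
          }
          if(threadIdx.x == 0)
              out[blockIdx.x] = buf[0];
      }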
  37. 29 Mar, 2022 1 commit
    • Refactor runtime compiled kernels to use the same compile_ops pipeline (#1125) · 661046c6
      Paul Fultz II authored
      This adds the infrastructure so we can compile everything in parallel, whereas before only pointwise kernels were compiled in parallel. It will also integrate directly with lowering and the gpu-driver. The pointwise and roialign kernels use this infrastructure; scatternd does not, since it requires a standard shape.
      
      This also makes it easier to add new runtime-compiled kernels in the future.
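      A hedged sketch of the idea (the names here are ours, not MIGraphX's API): every runtime-compiled op contributes a self-contained compile job, and the pipeline launches all jobs in parallel rather than only the pointwise ones.

      #include <functional>
      #include <future>
      #include <vector>

      struct compiled_kernel { /* code object, launch parameters, ... */ };

      using compile_job = std::function<compiled_kernel()>;

      std::vector<compiled_kernel> compile_all(const std::vector<compile_job>& jobs)
      {
          std::vector<std::future<compiled_kernel>> futures;
          futures.reserve(jobs.size());
          for(const auto& job : jobs)
              futures.push_back(std::async(std::launch::async, job));

          std::vector<compiled_kernel> results;
          results.reserve(futures.size());
          for(auto& f : futures)
              results.push_back(f.get()); // rethrows any compile error
          return results;
      }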