1. 17 Jun, 2023 2 commits
    • turneram
      Update CK commit hash and add gfx940 to supported archs (#1842) · b8898d7e
      * Add initial ck_gemm code
      
      * Format
      
      * Add additional src files
      
      * Format
      
      * Add include
      
      * Simplify fuse_ck
      
      * Format
      
      * Rename var
      
      * Enable pass
      
      * Update ck version
      
      * Fix include
      
      * Add group stride
      
      * Disable warnings for ck headers
      
      * Format
      
      * Add unpack array
      
      * Add interface to enable tuning
      
      * Format
      
      * Update compile_ops to handle tuning config
      
      * Format
      
      * Add some comments
      
      * Move time_op to migraphx_gpu
      
      * Add benchmarking
      
      * Refactor
      
      * Format
      
      * Add lift class macro
      
      * Use device name
      
      * Format
      
      * Generate configs
      
      * Format
      
      * Pass tuning parameter
      
      * Move data type to is_ck_gemm matcher
      
      * Format
      
      * Add problem_cache to avoid retuning same configs
      
      * Format
      
      * Format
      
      * Mark the problems
      
      * Format
      
      * Use is_null
      
      * Format
      
      * Resize vector
      
      * Only tune with exhaustive tuning
      
      * Format
      
      * Use assert
      
      * Format
      
      * Tidy fixes
      
      * More tidy fixes
      
      * Format
      
      * Add license to missing files
      
      * Format
      
      * Use transform
      
      * Format
      
      * Fix tidy
      
      * Format
      
      * Fix cppcheck issues
      
      * Format
      
      * Add static_assert
      
      * Add ops header
      
      * Add assertion in batcher
      
      * Format
      
      * Improve the batch fold check
      
      * Format
      
      * Add where op workaround for CK
      
      * Skip if any input is not a supported ck type
      
      * Format
      
      * Check batch is standard
      
      * Format
      
      * Remove redundant static keyword
      
      * Update commit hash
      
      * Fix error when running without --exhaustive-tune
      
      * Formatting
      
      * Formatting
      
      * Remove fuse_ck_gemm_softmax_gemm
      
      * Update ck hash
      
      * Correct spelling mistake
      
      * Remove commented out logic from fuse_ck
      
      * Remove unused include and add comment
      
      * Formatting
      
      * Remove redundant get_shape and remove ck_gemm from names
      
      * Formatting
      
      * Allow for mixed types with int8 gemms
      
      * Formatting
      
      * Add back find_package from merge
      
      * Update CK commit hash and add gfx940 to fuse_ops supported archs
      
      * Formatting
      
      * Update CK hash
    • Umang Yadav
      Fix convert operation for NaNs (#1840) · 2d635f91
      * Fix convert for the NaNs
      
      * NaNs can't be compared; use std::isnan()
      
      * formatting
      
      * formatting
      
      * formatting
      
      * add extra tests
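      The fix above rests on a subtlety of IEEE 754: every ordered comparison with a NaN is false, so a range-clamping convert silently maps NaN to one of the clamp bounds instead of preserving it. A minimal sketch of the idea (the function name `convert_clamped` is hypothetical, not MIGraphX's actual API):

      ```cpp
      #include <cassert>
      #include <cmath>
      #include <limits>

      // Hypothetical clamping double->float convert. NaN compares false
      // against both bounds, so a naive clamp would fall through to the
      // cast path or hit a bound; checking std::isnan() first preserves
      // NaN through the conversion.
      float convert_clamped(double x)
      {
          if (std::isnan(x))
              return std::numeric_limits<float>::quiet_NaN();
          const double lo = std::numeric_limits<float>::lowest();
          const double hi = std::numeric_limits<float>::max();
          if (x < lo)
              return static_cast<float>(lo);
          if (x > hi)
              return static_cast<float>(hi);
          return static_cast<float>(x);
      }

      int main()
      {
          const double nan = std::numeric_limits<double>::quiet_NaN();
          // Ordered comparisons with NaN are all false.
          assert(!(nan < 0.0) && !(nan > 0.0));
          // With the isnan() guard, NaN survives the convert.
          assert(std::isnan(convert_clamped(nan)));
          // Ordinary values are unaffected.
          assert(convert_clamped(1.5) == 1.5f);
      }
      ```

      This is also why comparing against NaN with `==` can never detect it: `nan == nan` is itself false, which is exactly the check `std::isnan()` replaces.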
  2. 16 Jun, 2023 1 commit
  3. 15 Jun, 2023 2 commits
    • Umang Yadav
      use __hmax, __hmin (#1813) · d208adfc
    • Brian Pickrell
      fix parse_instancenorm to create broadcast and multibroadcast instruc… (#1715) · 41ba30d5
      * fix parse_instancenorm to create broadcast and multibroadcast instructions with two dynamic shape arguments instead of one. Their make_op() functions don't support dynamic shapes when called with one input, which caused an error when parsing an ONNX 3duunet model
      
      * Use add_common_op() to create multibroadcast op.
      
      * add verification and parsing test for instance_norm with dynamic input.  Parse test doesn't pass.
      
      * fix for test; still doesn't pass
      
      * another fix for test; still doesn't pass
      
      * work in progress, instance_norm_dyn_batch_test works but instance_norm_test doesn't
      
      * fix onnx instancenorm tests to match parser changes.  Passes all check tests
      
      * Updated comments explaining usage of add_common_op()
      
      * hand-merged conflicts with develop
      
      * fix instance_norm_half_test after merge
      
      * add Onnx test instance_norm_dyn_batch_half_test
      
      * add shape test cases broadcast_1in_dyn_error and multibroadcast_1in_dyn_error_0
  4. 14 Jun, 2023 2 commits
  5. 12 Jun, 2023 1 commit
  6. 09 Jun, 2023 3 commits
  7. 08 Jun, 2023 2 commits
  8. 06 Jun, 2023 2 commits
  9. 05 Jun, 2023 1 commit
  10. 01 Jun, 2023 1 commit
  11. 31 May, 2023 1 commit
  12. 30 May, 2023 2 commits
  13. 28 May, 2023 1 commit
  14. 25 May, 2023 1 commit
  15. 24 May, 2023 2 commits
  16. 23 May, 2023 1 commit
  17. 20 May, 2023 1 commit
  18. 19 May, 2023 1 commit
  19. 17 May, 2023 2 commits
  20. 08 May, 2023 1 commit
  21. 06 May, 2023 1 commit
  22. 05 May, 2023 3 commits
  23. 04 May, 2023 2 commits
    • Paul Fultz II
      Rewrite multiplies with dot operator (#1685) · 457703a8
      When either the input or the output of a dot is multiplied across the K dimension, the multiplier can be moved onto the constant operand, which can then be folded with propagate_const.
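      The rewrite relies on scalar multiplication commuting with matrix multiplication: c·(A·B) = (c·A)·B = A·(c·B), so the scale can be pushed onto whichever operand is a constant and folded ahead of time. A small self-contained sketch of the identity (2×2 matrices only, for illustration; not the MIGraphX pass itself):

      ```cpp
      #include <array>
      #include <cassert>

      // 2x2 matrix, contracted over the shared K dimension in dot().
      using Mat = std::array<std::array<double, 2>, 2>;

      Mat dot(const Mat& a, const Mat& b)
      {
          Mat r{};
          for (int i = 0; i < 2; ++i)
              for (int j = 0; j < 2; ++j)
                  for (int k = 0; k < 2; ++k)
                      r[i][j] += a[i][k] * b[k][j];
          return r;
      }

      Mat scale(const Mat& a, double c)
      {
          Mat r{};
          for (int i = 0; i < 2; ++i)
              for (int j = 0; j < 2; ++j)
                  r[i][j] = a[i][j] * c;
          return r;
      }

      int main()
      {
          Mat a = {{{1, 2}, {3, 4}}};
          Mat b = {{{5, 6}, {7, 8}}};
          // scale(dot(a, b), c) == dot(a, scale(b, c)): the multiply after
          // the dot can be pushed onto operand b. If b is a literal, the
          // scaled b is computable at compile time and the runtime multiply
          // disappears.
          assert(scale(dot(a, b), 3.0) == dot(a, scale(b, 3.0)));
          assert(scale(dot(a, b), 3.0) == dot(scale(a, 3.0), b));
      }
      ```

      The same reasoning applies to a multiply on the inputs: scaling A or B before the dot is equivalent to scaling the result, so the scale migrates to whichever side constant folding can eliminate.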
    • Zhuoran Yin
      [mlir] Adding quant convolution fusion as anchor op (#1683) · 7f105952
      * Exposed the mlir_enabled() call to decide whether the lowering pipeline is enabled
      * Disabled the rewrite quantization pipeline in MLIR compilation
      * Added quant convolution as an anchor op
      * Fixed the return type expectations
      * Added the fallback HIP implementation for quantizelinear and dequantizelinear
      * Will need advice on improving the quantizelinear implementation
  24. 03 May, 2023 1 commit
    • Charlie Lin
      Update C/C++ API for dynamic batch (#1712) · 0ff00ef6
      * Relies on "Removed split_single_dyn_dim compile flag" (#1711)
      * Exposes dynamic_dimension as an opaque object with dynamic_dimensions and optimals
      * Exposes ONNX dyn_input_dims and default_dyn_dim to run with dynamic batch
      * Updates api.py to be able to create objects from aggregate initialization (used for dynamic_dimension)
      * Uses offload copy for now
  25. 02 May, 2023 1 commit
  26. 28 Apr, 2023 1 commit
  27. 25 Apr, 2023 1 commit