1. 07 Jan, 2025 1 commit
    • [CK_TILE] fmha fwd splitkv optimization for decode (seqlen_q=1) (#1789) · 24b12d04
      Po Yen Chen authored
      
      
      * Update license year
      
      * Add initial code to override decode problem
      
      * Fix splitkv traits/args overriding error
      
      * Reshape and transpose lse for decode
      
      * Remove debug code
      
      * Prettify example code
      
      * Use better function name
      
      * Add kMergeNumHeadGroupsSeqLenQ flag
      
      Kernel users can use this switch to turn the optimization on or off
      for some problem sizes (see the sketch after this entry)
      
      * Add missing flag declarations
      
      * Default turn off kMergeNumHeadGroupsSeqLenQ in codegen
      
      * Group similar statements together
      
      * Remove assumption of seqlen_q=1
      
      * Remove kMergeNumHeadGroupsSeqLenQ from splitkv combine kernel
      
      * Support kMergeNumHeadGroupsSeqLenQ=true in fmha splitkv kernel
      
      * Run kMergeNumHeadGroupsSeqLenQ=true kernels when needed
      
      * Fix group mode block skip logic
      
      * Undo changes of normal fwd kernel
      
      * Update GridSize() and use it for the splitkv kernel (#1799)
      
      ---------
      Co-authored-by: Qianfeng <qianfeng.zhang@amd.com>
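      A minimal sketch of the idea behind kMergeNumHeadGroupsSeqLenQ, assuming
      the usual b-s-h-d Q layout where, for seqlen_q == 1, the query heads that
      share one K/V head sit contiguously in memory. Everything except the flag
      name is a hypothetical illustration, not the actual CK_TILE interface:

      ```cpp
      #include <cstdint>

      struct FmhaFwdSplitKVArgs // illustrative subset of the kernel arguments
      {
          std::int64_t seqlen_q;
          std::int64_t nhead_q; // = nhead_k * num_head_groups under GQA
          std::int64_t nhead_k;
          std::int64_t stride_q;       // stride between Q sequence positions
          std::int64_t nhead_stride_q; // stride between Q heads
      };

      // For decode (seqlen_q == 1), the GQA head groups of one K/V head can be
      // re-viewed as extra query rows, so one block fills a whole M-tile
      // instead of a single row. This is what the flag enables.
      inline void merge_num_head_groups_into_seqlen_q(FmhaFwdSplitKVArgs& args)
      {
          const std::int64_t group = args.nhead_q / args.nhead_k;
          if(args.seqlen_q != 1 || group == 1)
              return; // optimization only applies to GQA decode

          args.seqlen_q = group;               // effective M dimension grows
          args.nhead_q  = args.nhead_k;        // one merged head per K/V head
          args.stride_q = args.nhead_stride_q; // walk heads as if they were rows
      }
      ```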
  2. 04 Jan, 2025 2 commits
  3. 03 Jan, 2025 3 commits
  4. 02 Jan, 2025 2 commits
  5. 29 Dec, 2024 1 commit
    • Remove using partitioner for all fmha kernels (#1778) · 4e076909
      Qianfeng authored
      * Remove using tile partitioner for fmha_fwd_kernel
      
      * Remove using tile partitioner for fmha_fwd_splitkv and splitkv-combine kernels
      
      * Remove using tile partitioner for fmha_fwd_appendkv kernel
      
      * Unify the format of GetTileIndex (sketch after this entry)
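      For context, dropping the partitioner means each kernel derives its tile
      coordinates directly from the block index, with GridSize() as the
      host-side inverse. A hedged sketch of that shape, with assumed names and
      signatures rather than CK Tile's real ones:

      ```cpp
      #include <hip/hip_runtime.h>

      __host__ __device__ constexpr int integer_divide_ceil(int a, int b)
      {
          return (a + b - 1) / b;
      }

      struct TileIndex { int i_m, i_n; };

      // Host side: one workgroup per (M-tile, N-tile) pair.
      template <int kM, int kN>
      dim3 GridSize(int num_rows, int num_cols)
      {
          return dim3(integer_divide_ceil(num_rows, kM) *
                      integer_divide_ceil(num_cols, kN));
      }

      // Device side: invert the linearization, no partitioner object needed.
      template <int kM, int kN>
      __device__ TileIndex GetTileIndex(int num_cols)
      {
          const int num_tile_n = integer_divide_ceil(num_cols, kN);
          const int bid        = static_cast<int>(blockIdx.x);
          return {bid / num_tile_n, bid % num_tile_n};
      }
      ```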
  6. 28 Dec, 2024 1 commit
  7. 23 Dec, 2024 1 commit
  8. 20 Dec, 2024 3 commits
    • fix typo for CK_USE_OCP_FP8 (#1769) · 07339c73
      Illia Silin authored
    • hot-fix (#1768) · 1c45ca35
      carlushuang authored
    • [CK_TILE] Add fmha fwd N-Warp S-Shuffle pipeline (fmha fwd splitkv pipeline variant) (#1705) · 37cdbf4f
      Po Yen Chen authored
      
      
      * Add check for zero values
      
      * Add static assertions
      
      * Remove invalid option '-e' in smoke_test.sh
      
      * Use correct path of smoke_test.sh
      
      * Avoid zero-sized shared memory array
      
      * Add warning comment
      
      * Replace expr by integer_divide_ceil() call
      
      * Use more readable constant names
      
      * Write down assumption as static assertion
      
      * Add more diagnostic error messages
      
      * Fix wrong BlockWarps when using default pipeline policy
      
      * Add more static assertions for A LDS desc
      
      * Allow using vector size < 8 for data type fp16/bf16
      
      * Align vector size between DRAM dist & LDS desc
      
      * Remove no-longer used func decl
      
      * Fix wrongly displayed pipeline name
      
      * Undo policy template changes for tile_example_gemm_basic
      
      * Add missing space and make error message stand out
      
      * Unify print precision
      
      * Add missing include directive <iomanip>
      
      * Replace constant 64 by get_warp_size() call
      
      * Replace constant 128 by named variable: BankLength
      
      * Add kAMBlock/kBNBlock attributes
      
      * Allow using different A/B warp dist for multiple blocks
      
      * Add helper function to get warp dist encodings
      
      * Add 4x64x4 fp16 warp gemm attribute impl
      
      * Complete the A/B warp dist encoding logic
      
      * Fix wrong thread mapping for C matrix
      
      * Use smaller vector size for small tile
      
      * Add static assert to block unsupported warp gemm impl
      
      * Extract common code out as helper method
      
      * Add 4x64x16 fp16 warp gemm type alias
      
      * Add comment to warn developers
      
      * Undo WarpGemmAttributeMfma<> changes
      
      * Use clearer static assertion error message
      
      * Add trivial wrapper to get warp dstr encodings
      
      * Only transpose warp gemm result if it's square
      
      * Fix compilation error
      
      * Support multi-block warp gemm (on N direction)
      
      * Remove duplicated code
      
      * Fix output encoding of warp gemm
      
      * Fix wrong shape of WarpGemmAttributeMfmaIterateK<>
      
      * Remove unused code
      
      * Fix wrong shape of WarpGemmAttributeMfmaImplF16F16F32M4N64K4
      
      * Add type config for bf16_t
      
      * Add 4x64x16 bf16 warp gemm
      
      * Update WarpGemmAttributeMfmaIterateKAndTransposedCDistribution
      
      * Add 64x4x4 fp16/bf16 warp gemm impl
      
      * Add 64x4x16 fp16/bf16 warp gemm
      
      * Add static assertion for better error diagnostic
      
      * Get Q dram dstr directly from block gemm
      
      * Add missing header: fused_moe.hpp
      
      * Allow specifying different warp-gemm for gemm0 & gemm1
      
      * Store P matrix into LDS before gemm1
      
      * Fix inconsistent kernel name
      
      * Remove constraint on gemm0 & gemm1 block warps
      
      * Remove unsupported vector size from checking list
      
      * Allow using 4x64x16 warp gemm for gemm0
      
      * Finish policy customization
      
      * Finish pipeline modification
      
      * Use block warps in codegen
      
      * Fix wrong rank of m_lds_window origin
      
      * Use better distributed tensor
      
      * Make P-store earlier
      
      * Remove duplicated expressions
      
      * Remove unnecessary tile window
      
      * Create new files for new splitkv pipeline
      
      * Separate old/new pipeline codegen logic
      
      * Sync changes from develop
      
      * Undo gemm kernel/pipeline changes
      
      * Undo gemm example changes
      
      * Remove blank lines
      
      * Fix typo
      
      * Use new warp gemm interface
      
      * Fix link error
      
      * Fix wrong pipeline tag
      
      * Fix more link errors
      
      * Avoid unnecessary padding
      
      * Always use vector load for K
      
      * Pad the fastest dimension when necessary
      
      * Force padding Q on hdim_q
      
      * Set high dimension padding flag to false
      
      * Re-format headers
      
      * Use warps=<1, 4, 1> for both gemm0 & gemm1
      
      * Fix compilation errors
      
      * Remove m/l shuffle logic
      
      * Ignore duplicate data when writing lse_acc
      
      * Use gemm0 block warps as lds tile width
      
      * Remove hard-coded numbers
      
      * Fix wrong distribution width
      
      * Remove unnecessary code
      
      * Add s_barrier before writing to LDS
      
      * Store Q into LDS before gemm0
      
      * Fix wrong Q tile size
      
      * Use simple Q lds descriptor for debugging
      
      * Use more realistic Q lds descriptor
      
      * Add comment & use better variable name
      
      * Make Q lds space not overlap with others
      
      * Remove unnecessary block_tile_reduce_sync() call
      
      * Move Q load statements
      
      * Move block_sync_lds() right before use
      
      * Re-order instructions
      
      * Remove unnecessary lambda expression
      
      * Use 8 threads on kMaxSplits direction while doing reduction
      
      * Tiny correction for using 8 threads on kMaxSplits direction for combine kernel
      
      * Padding num_split direction of o_acc tile window to 4x
      
      * Update splitkv combine pipeline design (see the combine sketch after this entry)
      
      * Add kN1 back to splitkv combine pipeline problem
      
      * Fix compilation errors
      
      * Add missing template parameter
      
      * Fix wrong splitkv combine kernel name
      
      * Fix wrong origin
      
      * Fix wrong LDS descriptor shape
      
      * Fix sync & reduction logic
      
      * Remove unnecessary static assertions
      
      * Extract tile size computation logics
      
      * Make sure we can reuse padding flags in combine kernels
      
      * Rename variables
      
      * Use OaccDataType in BlockFmhaSplitKVCombinePipelineTileSizes<>
      
      * Remove unnecessary static assertion
      
      * Fix function name typo
      
      * Add constraint on kN1 template parameter
      
      * Hide K tile loading latency in earlier iteration
      
      * Fix wrong splitkv kernel name
      
      * Use s_shuffling to replace p_shuffling, which removes the need for cross-warp reduction
      
      * Rename pipeline
      
      * Fix wrong pipeline name attribute
      
      * Add GetAlignmentQ() for NWarpSShuffle pipeline
      
      * Separate Q tile into dram tile & register tile concepts
      
      * Remove non-square warp gemm transpose c type alias
      
      * Fallback tile size changes for fmha fwd splitkv
      
      * Remove redundant change
      
      * Refine naming for the S tile
      
      * Use better naming of the S tile dstr (read from lds)
      
      * Share Q lds with K lds
      
      * Tiny change
      
      * Use static_for to pass CI checks
      
      ---------
      Co-authored-by: Qianfeng Zhang <Qianfeng.Zhang@amd.com>
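      The splitkv combine work above merges per-split partial outputs using
      their log-sum-exp values. The math below is the standard split-KV merge
      rather than CK's exact code; a scalar reference with illustrative names:

      ```cpp
      #include <algorithm>
      #include <cmath>
      #include <cstddef>
      #include <vector>

      // Each split s produced a partial output o_acc[s][d] and a log-sum-exp
      // lse_acc[s] over its K/V slice; re-weighting the partials reproduces
      // single-pass softmax attention exactly.
      void combine_splits(const std::vector<std::vector<float>>& o_acc, // [num_splits][hdim]
                          const std::vector<float>& lse_acc,            // [num_splits]
                          std::vector<float>& o_out)                    // [hdim]
      {
          const std::size_t num_splits = lse_acc.size();

          float m = -INFINITY; // max over splits, for numerical stability
          for(std::size_t s = 0; s < num_splits; ++s)
              m = std::max(m, lse_acc[s]);

          std::vector<float> w(num_splits);
          float sum_w = 0.f;
          for(std::size_t s = 0; s < num_splits; ++s)
              sum_w += (w[s] = std::exp(lse_acc[s] - m));

          for(std::size_t d = 0; d < o_out.size(); ++d)
          {
              float acc = 0.f;
              for(std::size_t s = 0; s < num_splits; ++s)
                  acc += w[s] * o_acc[s][d];
              o_out[d] = acc / sum_w; // equals softmax over the full seqlen_k
          }
      }
      ```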
  9. 19 Dec, 2024 1 commit
  10. 18 Dec, 2024 3 commits
    • [CK TILE] Refactor GemmKernel to be reused by other GEMM related operators (#1730) · 453ca373
      aledudek authored
      * Gemm Kernel Refactor part1
      
      * Gemm Kernel Refactor common gemm pipeline part2
      
      * [CK TILE] Refactor batched gemm to reuse GemmKernel
      
      * [CK TILE] Refactor GemmKernel - review changes part1
      
      * [CK TILE] Refactor GemmKernel - references fix
      
      * [CK TILE] Refactor GemmKernel - naming changes, add problem
      
      * [CK_TILE] Refactor GemmKernel - update tests
      
      * [CK_TILE] Refactor GemmKernel - review changes
      
      * [CK_TILE] Refactor GemmKernel - update test
      
      * [CK_TILE] Refactor GemmKernel - constness fixes
      
      * [CK_TILE] Refactor GemmKernel - update tests
    • Disambiguate bit_cast (#1749) · 1c1b3363
      Xiaodong Wang authored
      
      
      Add namespace qualification to disambiguate from std::bit_cast (as sketched below)
      Co-authored-by: Po Yen Chen <PoYen.Chen@amd.com>
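      The underlying C++ issue: once both ck::bit_cast and std::bit_cast are
      visible in the same scope (e.g. via using-directives in a C++20 build
      that includes <bit>), an unqualified call is ambiguous, and qualifying
      it resolves the overload. A condensed illustration; this ck::bit_cast
      is a simplified stand-in for the real definition:

      ```cpp
      #include <bit> // std::bit_cast (C++20)
      #include <cstdint>

      namespace ck {
      template <typename Dst, typename Src>
      constexpr Dst bit_cast(const Src& src)
      {
          return __builtin_bit_cast(Dst, src);
      }
      } // namespace ck

      using namespace ck;
      using namespace std;

      float f = 1.0f;
      // auto u = bit_cast<std::uint32_t>(f);   // error: ambiguous overload
      auto u = ck::bit_cast<std::uint32_t>(f);  // the fix: qualify the call
      ```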
    • [CK_TILE] Move hipmalloc/memcpy calls out of gpu reference gemm (#1743) · f6c4d614
      aledudek authored
      * [CK_TILE] Move hipmalloc/memcpy calls out of gpu reference gemm (call-site sketch after this entry)
      
      * [CK_TILE] Move hipmalloc/memcpy calls out of gpu reference gemm - review changes
      
      * [CK_TILE] Move hipmalloc/memcpy calls out of gpu reference gemm - review fix
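      After this change the GPU reference gemm is a device-pointer-only entry
      point; the example owns allocation and transfers. A hedged call-site
      sketch, where reference_gemm_gpu stands in for the actual CK_TILE symbol:

      ```cpp
      #include <hip/hip_runtime.h>
      #include <vector>

      // Assumed device-pointer-only reference entry point (illustrative).
      template <typename T>
      void reference_gemm_gpu(const T* a_dev, const T* b_dev, T* c_dev,
                              int M, int N, int K);

      template <typename T>
      void run_reference(const std::vector<T>& a, const std::vector<T>& b,
                         std::vector<T>& c, int M, int N, int K)
      {
          T *a_dev, *b_dev, *c_dev;
          hipMalloc(&a_dev, a.size() * sizeof(T)); // caller owns the buffers now
          hipMalloc(&b_dev, b.size() * sizeof(T));
          hipMalloc(&c_dev, c.size() * sizeof(T));
          hipMemcpy(a_dev, a.data(), a.size() * sizeof(T), hipMemcpyHostToDevice);
          hipMemcpy(b_dev, b.data(), b.size() * sizeof(T), hipMemcpyHostToDevice);

          reference_gemm_gpu(a_dev, b_dev, c_dev, M, N, K); // no hidden hipMalloc/memcpy

          hipMemcpy(c.data(), c_dev, c.size() * sizeof(T), hipMemcpyDeviceToHost);
          hipFree(a_dev);
          hipFree(b_dev);
          hipFree(c_dev);
      }
      ```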
  11. 17 Dec, 2024 3 commits
  12. 15 Dec, 2024 1 commit
  13. 13 Dec, 2024 2 commits
    • Add SplitK support into Batched GEMM V3 (#1729) · 4d8fce33
      Bartłomiej Kocot authored
      
      
      * add bmm api
      
      * add bf16 multi_d
      
      * add ckProfiler for bf16
      
      * add ckProfiler files
      
      * add more instances; fixed 64bit index issue
      
      * fixed naming
      
      * enabled batched Ds
      
      * use long_index for ds offsets
      
      * clean
      
      * add bmm fp8 ckProfiler
      
      * Update example/24_batched_gemm/batched_gemm_xdl_bf16_v3.cpp
      Co-authored-by: Bartłomiej Kocot <bartlomiejkocot98@gmail.com>
      
      * Update example/24_batched_gemm/batched_gemm_xdl_fp8_rowwise_v3.cpp
      Co-authored-by: Bartłomiej Kocot <bartlomiejkocot98@gmail.com>
      
      * Update example/24_batched_gemm/run_batched_gemm_example_rowwise.inc
      Co-authored-by: Bartłomiej Kocot <bartlomiejkocot98@gmail.com>
      
      * Update library/src/tensor_operation_instance/gpu/gemm_universal_batched/device_batched_gemm_xdl_universal_bf16_bf16_bf16/device_batched_gemm_xdl_universal_bf16_bf16_bf16_mk_nk_mn.hpp
      Co-authored-by: Bartłomiej Kocot <bartlomiejkocot98@gmail.com>
      
      * Update library/src/tensor_operation_instance/gpu/gemm_universal_batched/device_batched_gemm_xdl_universal_bf16_bf16_bf16/device_batched_gemm_xdl_universal_bf16_bf16_bf16_mk_nk_mn_mem_v1_default_instance.cpp
      Co-authored-by: Bartłomiej Kocot <bartlomiejkocot98@gmail.com>
      
      * Update library/src/tensor_operation_instance/gpu/gemm_universal_batched/device_batched_gemm_xdl_universal_bf16_bf16_bf16/device_batched_gemm_xdl_universal_bf16_bf16_bf16_mk_nk_mn_mem_v2_default_instance.cpp
      Co-authored-by: Bartłomiej Kocot <bartlomiejkocot98@gmail.com>
      
      * Update profiler/src/profile_gemm_universal_batched.cpp
      Co-authored-by: Bartłomiej Kocot <bartlomiejkocot98@gmail.com>
      
      * Update profiler/include/profiler/profile_gemm_universal_batched_impl.hpp
      Co-authored-by: Bartłomiej Kocot <bartlomiejkocot98@gmail.com>
      
      * clean
      
      * Update include/ck/tensor_operation/gpu/device/impl/device_batched_gemm_multiple_d_xdl_cshuffle_v3.hpp
      
      * Update include/ck/tensor_operation/gpu/device/impl/device_batched_gemm_multiple_d_xdl_cshuffle_v3.hpp
      
      * Update library/src/tensor_operation_instance/gpu/gemm_universal_batched/device_batched_gemm_xdl_universal_bf16_bf16_bf16/device_batched_gemm_xdl_universal_bf16_bf16_bf16_mk_nk_mn_comp_default_instance.cpp
      
      * Update include/ck/tensor_operation/gpu/device/impl/device_batched_gemm_multiple_d_xdl_cshuffle_v3.hpp
      
      * Update include/ck/tensor_operation/gpu/device/impl/device_batched_gemm_multiple_d_xdl_cshuffle_v3.hpp
      
      * Update include/ck/tensor_operation/gpu/device/impl/device_batched_gemm_multiple_d_xdl_cshuffle_v3.hpp
      
      * refactor batch offset func
      
      * add splitk support into bmm_v3 (slice sketch after this entry)
      
      * clean
      
      * clean
      
      * format
      
      * fixed
      
      * fix
      
      ---------
      Co-authored-by: Jing Zhang <jizhan@fb.com>
      Co-authored-by: zjing14 <zhangjing14@gmail.com>
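      Split-K here follows the usual decomposition: each workgroup processes
      one K-slice of its (batch, M-tile, N-tile), and the k_batch partial
      products are reduced into C (so C must be zero-initialized first). A
      small sketch of the slicing, with illustrative names:

      ```cpp
      #include <algorithm>

      struct SplitKSlice { int k_begin, k_size; };

      // Partition the K dimension into k_batch nearly equal slices; the last
      // slice may be shorter when K is not divisible by k_batch.
      inline SplitKSlice get_splitk_slice(int K, int k_batch, int k_id)
      {
          const int k_per_split = (K + k_batch - 1) / k_batch;
          const int k_begin     = k_id * k_per_split;
          const int k_end       = std::min(K, k_begin + k_per_split);
          return {k_begin, k_end - k_begin};
      }
      // e.g. K = 1000, k_batch = 4 -> slices of 250; each workgroup accumulates
      // its partial C over [k_begin, k_begin + k_size), e.g. with atomic adds.
      ```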
    • Ck tile/smoothquant out stride (#1742) · 4e731776
      chenjun authored
      * add ck_tile/smoothquant out stride parameter
      
      * Remove the default stride value
      
      ---------
      
      Co-authored-by: so <a.com>
  14. 12 Dec, 2024 1 commit
    • [CK_TILE] naive attn (#1708) · 77a38e02
      carlushuang authored
      * add reference attention fwd (scalar sketch after this entry)
      
      * refactor addresser
      
      * update
      
      * paged, and i8 reflect-quant
      
      * let's call it forward-quant
      
      * fix error in decode variation
      
      * update naive-attn
      
      * fix page table
      
      * fix build err
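      A reference attention forward of this kind is a plain scalar oracle:
      o = softmax(q k^T * scale) v per head. A hedged single-head version
      (row-major [seqlen, hdim] buffers; names illustrative, not the example's
      actual code):

      ```cpp
      #include <algorithm>
      #include <cmath>
      #include <vector>

      void naive_attention_fwd(const std::vector<float>& q, // [seqlen_q][hdim]
                               const std::vector<float>& k, // [seqlen_k][hdim]
                               const std::vector<float>& v, // [seqlen_k][hdim]
                               std::vector<float>& o,       // [seqlen_q][hdim]
                               int seqlen_q, int seqlen_k, int hdim, float scale)
      {
          std::vector<float> s(seqlen_k);
          for(int i = 0; i < seqlen_q; ++i)
          {
              float m = -INFINITY;
              for(int j = 0; j < seqlen_k; ++j) // s_j = (q_i . k_j) * scale
              {
                  float acc = 0.f;
                  for(int d = 0; d < hdim; ++d)
                      acc += q[i * hdim + d] * k[j * hdim + d];
                  m = std::max(m, s[j] = acc * scale);
              }

              float sum = 0.f; // numerically stable softmax over s
              for(int j = 0; j < seqlen_k; ++j)
                  sum += (s[j] = std::exp(s[j] - m));

              for(int d = 0; d < hdim; ++d) // o_i = sum_j p_ij * v_j
              {
                  float acc = 0.f;
                  for(int j = 0; j < seqlen_k; ++j)
                      acc += s[j] * v[j * hdim + d];
                  o[i * hdim + d] = acc / sum;
              }
          }
      }
      ```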
  15. 10 Dec, 2024 1 commit
  16. 06 Dec, 2024 2 commits
  17. 05 Dec, 2024 1 commit
  18. 04 Dec, 2024 2 commits
  19. 03 Dec, 2024 2 commits
    • Add basic documentation structure (#1715) · 5affda81
      Bartłomiej Kocot authored
      * Add basic documentation structure
      
      * Add terminology placeholder
      
      * Add codegen placeholder
      
      * Create template for each page
    • OCP FP8 support for gfx12. (#1710) · 08d5c02c
      Illia Silin authored
      * (2/5) bilinear gemm pass; perf bug: skipping A LDS has lower performance than skipping B LDS
      
      * (3/5) batched gemm pass; perf bug: skipping A LDS has lower performance than skipping B LDS
      
      * (4/5) grouped conv pass
      
      * (5/5) attention pass, todo: debug lds perf bug
      
      * AIT Attention API refactor (#8)
      
      * sanity pass
      
      * sanity pass 2
      
      * confirm significant performance regression.
      
      * turn on all instances
      
      * turn off instance format
      
      * Fix bug & tuning & format
      
      * DML meta, self_attn+cross_attn
      
      * sanity pass
      
      * remove useless flag
      
      * update tile and problem size used in AIT attention
      
      * bug fix in grouped conv supporting check
      
      * deprecate inline asm wmma
      
      * Bug fix: double lds skip
      
      * clang-format
      
      * Fix errors in
      1. example, fmha
      2. gridwise pipeline
      3. deviceop, fmha, change some containers from vector to array
      
      * part2 of previous commit
      
      * clang format
      
      * API fix of gridwisegemmpipeline
      
      * separate array base and vector base attention tensor transformation
      
      * fix gemm
      
      * clang format
      
      * add gemm fp16 instances
      
      * Temp save
      
      * fpAintB kernel compile pass
      
      * Sanity pass.
      
      * Temp save
      
      * debug code enabled
      
      * Fp16AInt8B_GEMM sanity
      
      * MQA implementation
      
      * GQA-4 example
      
      * tempsave
      
      * Compile pass
      
      * New implementation of fp16Aint8B Gemm; achieves similar math throughput to native fp16 Gemm
      
      * Bump rocm-docs-core from 0.24.0 to 0.29.0 in /docs/sphinx
      
      Bumps [rocm-docs-core](https://github.com/RadeonOpenCompute/rocm-docs-core) from 0.24.0 to 0.29.0.
      - [Release notes](https://github.com/RadeonOpenCompute/rocm-docs-core/releases)
      - [Changelog](https://github.com/RadeonOpenCompute/rocm-docs-core/blob/develop/CHANGELOG.md)
      - [Commits](https://github.com/RadeonOpenCompute/rocm-docs-core/compare/v0.24.0...v0.29.0)
      
      ---
      updated-dependencies:
      - dependency-name: rocm-docs-core
        dependency-type: direct:production
        update-type: version-update:semver-minor
      ...
      Signed-off-by: dependabot[bot] <support@github.com>
      
      * initial enablement of gfx950
      
      * fix clang format
      
      * disable examples 31 and 41 int8 on gfx950
      
      * initial navi4x enablement
      
      * remove extra endif
      
      * enabled dl_gemm
      
      * update s_barrier and s_waitcnt for gfx12
      
      * fix the gfx12 assembly syntax
      
      * fixed block_sync_lds
      
      * add support for more dl kernels on navi4
      
      * add wmma
      
      * format
      
      * Todo: fix gemm_bilinear_wmma instances compilation bug
      
      * Solve a bug when K1=16
      
      * remove unnecessary changes
      
      * Remove tensor layout limitation to LDS usage in tensor contraction
      
      * fixed block_sync_lds
      
      * merge navi3_ref
      
      * update self-attention and cross-attention
      
      * fix a typo of name
      
      * fixed layout
      
      * debugging
      
      * Add arch limiter for fp8 gemm
      
      * fixed wmma
      
      * enable fp8 gemm_xdl for all gfx9 targets
      
      * temporarily disable gemm_xdl_fp16_fp8 on MI100/200
      
      * fix the cmake logic for gemm_xdl_fp16_fp8
      
      * fixed c_output
      
      * re-enable the gemm_xdl_fp16_fp8 on MI100/200
      
      * fixed gfx12
      
      * fixed
      
      * fixed
      
      * separate gfx12 blockwise_gemm
      
      * fixed
      
      * enable fwd conv on navi4x
      
      * enable gridwise
      
      * enabled gemm
      
      * fixed merge
      
      * remove empty example folder
      
      * fixed conflicts
      
      * some small changes
      
      * Update cmake-ck-dev.sh
      
      * Update cmake-ck-dev.sh
      
      * enabled other types
      
      * fixed register loads
      
      * test fa
      
      * enable gfx12
      
      * clean up
      
      * enable some instances on gfx12
      
      * add gfx1201 macro in amd_wmma header
      
      * fix clang format
      
      * enable batched_gemm_softmax_gemm_perm_wmma for gfx12
      
      * disable instances with blocksize=256 in attention examples
      
      * debugging
      
      * debug
      
      * fixed lds_enabled
      
      * debugging
      
      * Fix and add limit to skiplds feature
      
      * Enable skipLds feature and fix compilation bugs
      
      * add ck_tile definitions for gfx12
      
      * fix clang format and test/wmma_op
      
      * update instances cmake for gfx12
      
      * disable the test_wmma_op on gfx12
      
      * fix the builds for gfx950
      
      * add gfx12 and gfx950 to default target list
      
      * clean-up cmake file
      
      * Initial introduction of OFP8 data types (e4m3 decode sketch after this entry).
      
      * Renamed FP8 and BF8 tests into FP8_FNUZ and BF8_FNUZ.
      
      * Implementation of ConvertFP32Nearest in test_fp8_ocp.
      
      * Remove dependence on possibly undeclared alias.
      
      * Implement FP8OCP test for stochastic rounding mode.
      
      * Implement FP8OCP tests for half_t type conversions.
      
      * enable bf16 atomic add on gfx950
      
      * Implement ConvertFP32Nearest test.
      
      * Implement ConvertFP32Stochastic test.
      
      * Implement ConvertFP16Nearest and ConvertFP16Stochastic tests.
      
      * Refactoring. Move FP8 definitions into a separate header file.
      
      * Enable easy switching between architectures.
      
      * Fix compilation error for gfx942 architecture.
      
      * only build gfx950 branch for gfx950 target by default
      
      * Enable OCP build of example_gemm_xdl_fp8.
      
      * Fix formatting.
      
      * fix the build logic for gfx950
      
      * Improve GEMM example verbosity.
      
      * Add constexpr where applicable.
      
      * fix the logic of enabling XDL and WMMA instances
      
      * Improve GEMM example verbosity.
      
      * Enable build of example_gemm_xdl_fp8_bf8 test.
      
      * Fix tests for gfx1101 architecture.
      
      * Build DPP examples only on gfx103 and gfx11 architectures.
      
      * Optionally run either CPU or GPU verification with GEMM examples.
      
      * Extend GeneratorTensor_Sequential to produce values of prescribed data types.
      
      * Add missing constructor.
      
      * Improve infrastructure for OFP8 data type support.
      
      * BUGFIX. Should not use FP8 as Compute/Accum data type.
      
      * Add custom target for grouped_convnd_bwd_weight tests.
      
      * Can build `tests` target on gfx950.
      
      * Bugfixes on gfx1101 architecture.
      
      * Fix dependencies.
      
      * Provide single point of truth for FP8 INF and NAN checks
      
      * Prevent instantiation of operators that are not supported by FP8 data types
      
      * Add FP8 type selection into client_example CMakeLists.txt
      
      * Prevent sccache server from shutting down during build
      
      * Fix test success reporting logic
      
      * Change default verification method to CPU.
      
      GPU verification takes too much time to complete on the emulator.
      
      * Make sure all tests and examples are built for gfx950
      
      * Facilitate testing of FP8 data types on the emulator
      
      * Introduce two new tensor generators
      
      * Enable instances built for gfx94 to be built on gfx950
      
      * Verify 35_splitk_gemm on floating point numbers.
      
      splitk gemm appears to lose precision vs. the reference implementation when FP numbers are involved.
      
      * Verify 04_gemm_add_add_fastgelu on floating point numbers
      
      * Verify 20_grouped_conv_bwd_weight on floating point numbers
      
      * Verify 38_grouped_conv_bwd_data_multiple_d on floating point numbers
      
      * Verify more tests on floating point data
      
      * Fix data types and improve testing verbosity.
      
      * Upgrade to NPI 573 build docker.
      
      * Skip gemm_universal tests.
      
      The tests take too long to complete on the emulator.
      Need to see if it is possible to reduce the scope of the testing to just FP8 data types.
      
      * Fix gfx1101 build
      
      * Document test availability
      
      * Re-enable fp8 gemms for gfx94/95
      
      * Cherry-pick GEMM Universal tests for FP8 data types
      
      * Cleanup
      
      * CK_USE_GFX94 has already been set on this branch
      
      * Address formatting issues and leftovers
      
      * Make fail/pass logic consistent within 01_gemm folder
      
      Removed multiple negations in fail/pass logic to propagate `true` as the success indicator.
      
      * Fix GPU verification reporting logic.
      
      * Update year in copyright notice.
      
      * Cleanup
      
      * Use `enum class` instead of `enum`
      
      * Remove set_property for FP8 tests
      
      * Narrowing the scope of PR to OCP FP8 enablement only
      
      * Add tests for OCP FP8 vector_type storage
      
      * Enable gemm kernel on all gfx9 architectures (#227)
      
      * clean-up
      
      * Implement `non_native_vector_base` with `ext_vector_type` array. (#232)
      
      * Enable support of 1, 2, 4, and 8-byte custom types in CK.
      
      * Fix pool tests for OCP FP8 data type
      
      * fix jenkins file
      
      * restore cron trigger
      
      ---------
      Signed-off-by: dependabot[bot] <support@github.com>
      Co-authored-by: aska-0096 <haocwang@amd.com>
      Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
      Co-authored-by: Jing Zhang <jizhan@amd.com>
      Co-authored-by: zjing14 <zhangjing14@gmail.com>
      Co-authored-by: Jun Liu <Liu.Jun@amd.com>
      Co-authored-by: Andriy Roshchenko <andriy.roshchenko@amd.com>
      Co-authored-by: Andriy Roshchenko <107577548+andriy-ca@users.noreply.github.com>
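      For reference on the OCP ("OFP8") e4m3 format this PR enables: bias 7,
      no infinities, and a single NaN mantissa pattern at the top exponent,
      which is what pushes the maximum finite value to 448. The FNUZ variant
      used on earlier gfx9 differs: bias 8, maximum 240, and only 0x80 is NaN.
      A hedged decoder, not CK's actual f8_ocp_t implementation:

      ```cpp
      #include <cmath>
      #include <cstdint>

      float decode_fp8_e4m3_ocp(std::uint8_t x)
      {
          const int sign = (x >> 7) & 0x1;
          const int exp  = (x >> 3) & 0xF;
          const int mant = x & 0x7;

          float v;
          if(exp == 0xF && mant == 0x7)
              v = NAN; // S.1111.111 is the only NaN pattern (no infinities)
          else if(exp == 0)
              v = std::ldexp(static_cast<float>(mant), -9); // subnormal: mant/8 * 2^-6
          else
              v = std::ldexp(1.f + mant / 8.f, exp - 7); // normal, bias 7
          return sign ? -v : v;
      }
      // decode_fp8_e4m3_ocp(0x7E) == 448.f, the e4m3 maximum finite value.
      ```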
  20. 02 Dec, 2024 1 commit
  21. 30 Nov, 2024 1 commit
  22. 29 Nov, 2024 1 commit
    • Ck tile batched gemm example (#1615) · 78f0fea0
      aledudek authored
      * [CK Tile] Batched GEMM Example
      
      * [CK Tile] Batched GEMM Example - minor refactor
      
      * [CK Tile] Batched GEMM Example - README update
      
      * [CK Tile] Batched Gemm Example - review changes
      
      - Added tensor data layouts as input parameters
      - Changed structure of Host and Kernel args
      - Fixed invalid vector read on non-contiguous memory
      
      * [CK Tile] Batched Gemm Example - remove comment
      
      * [CK Tile] Batched Gemm Example - Add GTests part1
      
      * [CK Tile] Batched Gemm Example - GTests part2 + review changes
      
      * [CK TILE] Batched GEMM post merge fixes
      
      * [CK Tile] Batched GEMM Example - fix pad views
  23. 28 Nov, 2024 1 commit
  24. 27 Nov, 2024 3 commits