1. 18 Dec, 2024 1 commit
  2. 17 Dec, 2024 6 commits
  3. 16 Dec, 2024 5 commits
  4. 15 Dec, 2024 1 commit
  5. 14 Dec, 2024 2 commits
  6. 13 Dec, 2024 2 commits
      Add SplitK support into Batched GEMM V3 (#1729) · 4d8fce33
      Bartłomiej Kocot authored
      
      
      * add bmm api
      
      * add bf16 multi_d
      
      * add ckProfiler for bf16
      
      * add ckProfiler files
      
* add more instances; fixed 64-bit index issue
      
      * fixed naming
      
      * enabled batched Ds
      
      * use long_index for ds offsets
      
      * clean
      
      * add bmm fp8 ckProfiler
      
      * Update example/24_batched_gemm/batched_gemm_xdl_bf16_v3.cpp
Co-authored-by: Bartłomiej Kocot <bartlomiejkocot98@gmail.com>
      
      * Update example/24_batched_gemm/batched_gemm_xdl_fp8_rowwise_v3.cpp
Co-authored-by: Bartłomiej Kocot <bartlomiejkocot98@gmail.com>
      
      * Update example/24_batched_gemm/run_batched_gemm_example_rowwise.inc
Co-authored-by: Bartłomiej Kocot <bartlomiejkocot98@gmail.com>
      
      * Update library/src/tensor_operation_instance/gpu/gemm_universal_batched/device_batched_gemm_xdl_universal_bf16_bf16_bf16/device_batched_gemm_xdl_universal_bf16_bf16_bf16_mk_nk_mn.hpp
Co-authored-by: Bartłomiej Kocot <bartlomiejkocot98@gmail.com>
      
      * Update library/src/tensor_operation_instance/gpu/gemm_universal_batched/device_batched_gemm_xdl_universal_bf16_bf16_bf16/device_batched_gemm_xdl_universal_bf16_bf16_bf16_mk_nk_mn_mem_v1_default_instance.cpp
Co-authored-by: Bartłomiej Kocot <bartlomiejkocot98@gmail.com>
      
      * Update library/src/tensor_operation_instance/gpu/gemm_universal_batched/device_batched_gemm_xdl_universal_bf16_bf16_bf16/device_batched_gemm_xdl_universal_bf16_bf16_bf16_mk_nk_mn_mem_v2_default_instance.cpp
Co-authored-by: Bartłomiej Kocot <bartlomiejkocot98@gmail.com>
      
      * Update profiler/src/profile_gemm_universal_batched.cpp
Co-authored-by: Bartłomiej Kocot <bartlomiejkocot98@gmail.com>
      
      * Update profiler/include/profiler/profile_gemm_universal_batched_impl.hpp
Co-authored-by: Bartłomiej Kocot <bartlomiejkocot98@gmail.com>
      
      * clean
      
      * Update include/ck/tensor_operation/gpu/device/impl/device_batched_gemm_multiple_d_xdl_cshuffle_v3.hpp
      
      * Update include/ck/tensor_operation/gpu/device/impl/device_batched_gemm_multiple_d_xdl_cshuffle_v3.hpp
      
      * Update library/src/tensor_operation_instance/gpu/gemm_universal_batched/device_batched_gemm_xdl_universal_bf16_bf16_bf16/device_batched_gemm_xdl_universal_bf16_bf16_bf16_mk_nk_mn_comp_default_instance.cpp
      
      * Update include/ck/tensor_operation/gpu/device/impl/device_batched_gemm_multiple_d_xdl_cshuffle_v3.hpp
      
      * Update include/ck/tensor_operation/gpu/device/impl/device_batched_gemm_multiple_d_xdl_cshuffle_v3.hpp
      
      * Update include/ck/tensor_operation/gpu/device/impl/device_batched_gemm_multiple_d_xdl_cshuffle_v3.hpp
      
      * refactor batch offset func
      
* add splitk support into bmm_v3
      
      * clean
      
      * clean
      
      * format
      
      * fixed
      
      * fix
      
      ---------
Co-authored-by: Jing Zhang <jizhan@fb.com>
Co-authored-by: zjing14 <zhangjing14@gmail.com>
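The split-K technique this commit adds can be illustrated independently of CK: split the K (reduction) dimension of a batched GEMM into chunks, compute a partial product per chunk, and sum the partials. A minimal NumPy sketch of that idea, with illustrative names only (this is not the CK API):

```python
import numpy as np

def batched_gemm_splitk(A, B, k_splits=4):
    """Basic split-K batched GEMM: partial GEMMs over K chunks, then a
    reduction over the partials.  A: (batch, M, K), B: (batch, K, N)."""
    batch, M, K = A.shape
    _, _, N = B.shape
    out = np.zeros((batch, M, N), dtype=np.float32)
    # Chunk boundaries along the K dimension, e.g. [0, K/4, K/2, 3K/4, K].
    bounds = np.linspace(0, K, k_splits + 1, dtype=int)
    for s in range(k_splits):
        k0, k1 = bounds[s], bounds[s + 1]
        # Partial product over one K chunk; on a GPU each chunk would be
        # computed by a separate workgroup and reduced afterwards.
        out += A[:, :, k0:k1] @ B[:, k0:k1, :]
    return out
```

The result is bitwise close to (but, in floating point, not necessarily identical to) a single unsplit GEMM, which is why split-K kernels typically accumulate partials at higher precision.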
      Ck tile/smoothquant out stride (#1742) · 4e731776
      chenjun authored
      * add ck_tile/smoothquant out stride parameter
      
      * Remove the default stride value
      
      ---------
      
      Co-authored-by: so <a.com>
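The output-stride parameter this commit adds lets the caller place each output row at a chosen leading dimension instead of assuming a packed layout. A toy NumPy sketch of what such a parameter buys (names are hypothetical, not the ck_tile interface):

```python
import numpy as np

def write_with_out_stride(result, out_buf, out_stride):
    """Copy an (M, N) result into a flat row-major buffer whose row
    stride (leading dimension) may be larger than N, leaving any
    padding between rows untouched."""
    M, N = result.shape
    assert out_stride >= N, "stride must cover at least one full row"
    for i in range(M):
        out_buf[i * out_stride : i * out_stride + N] = result[i]
```

This is how a kernel can write directly into a slice of a larger tensor without a follow-up copy.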
  7. 12 Dec, 2024 1 commit
      [CK_TILE] naive attn (#1708) · 77a38e02
      carlushuang authored
      * add reference attention fwd
      
      * refactor addresser
      
      * update
      
      * paged, and i8 reflect-quant
      
* let's call it forward-quant
      
      * fix error in decode variation
      
      * update naive-attn
      
      * fix page table
      
      * fix build err
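A reference ("naive") attention forward pass of the kind this commit adds can be sketched in a few lines of NumPy. This is the textbook scores / row-softmax / weighted-sum formulation, shown only as a conceptual baseline, not the CK_TILE code:

```python
import numpy as np

def naive_attention_fwd(Q, K, V):
    """Reference attention forward.
    Q: (seq_q, d), K: (seq_k, d), V: (seq_k, d_v) -> (seq_q, d_v)."""
    scale = 1.0 / np.sqrt(Q.shape[-1])
    S = (Q @ K.T) * scale                  # (seq_q, seq_k) attention scores
    S = S - S.max(axis=-1, keepdims=True)  # subtract row max for stability
    P = np.exp(S)
    P = P / P.sum(axis=-1, keepdims=True)  # row-wise softmax
    return P @ V                           # weighted sum of value rows
```

Such a reference kernel is useful precisely as a correctness oracle for the optimized (paged / quantized) variants mentioned in the commit messages.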
  8. 10 Dec, 2024 4 commits
  9. 09 Dec, 2024 3 commits
  10. 06 Dec, 2024 5 commits
  11. 05 Dec, 2024 2 commits
  12. 04 Dec, 2024 2 commits
  13. 03 Dec, 2024 2 commits
      Add basic documentation structure (#1715) · 5affda81
      Bartłomiej Kocot authored
      * Add basic documentation structure
      
      * Add terminology placeholder
      
      * Add codegen placeholder
      
      * Create template for each page
      OCP FP8 support for gfx12. (#1710) · 08d5c02c
      Illia Silin authored
      * (2/5) bilinear gemm pass, perf bug: skip a lds has lower performance than skip b lds
      
      * (3/5) batched gemm pass, perf bug: skip a lds has lower performance than skip b lds
      
      * (4/5) grouped conv pass
      
      * (5/5) attention pass, todo: debug lds perf bug
      
      * AIT Attention API refactor (#8)
      
      * sanity pass
      
      * sanity pass 2
      
      * confirm significant performance regression.
      
      * turn on all instances
      
      * turn off instance format
      
* Fix bug & tuning & format
      
      * DML meta, self_attn+cross_attn
      
      * sanity pass
      
      * remove useless flag
      
      * update tile and problem size used in AIT attention
      
      * bug fix in grouped conv supporting check
      
      * deprecate inline asm wmma
      
      * Bug fix: double lds skip
      
      * clang-format
      
      * Fix errors in
      1. example, fmha
      2. gridwise pipeline
      3. deviceop, fmha, change some containers from vector to array
      
      * part2 of previous commit
      
      * clang format
      
      * API fix of gridwisegemmpipeline
      
      * separate array base and vector base attention...
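For context on the OCP FP8 format this commit targets: e4m3 has a 4-bit exponent (bias 7), a 3-bit mantissa, no infinities (the all-ones exponent with mantissa 7 encodes NaN), and a maximum finite value of 448. A NumPy sketch that enumerates the e4m3 value grid and rounds to nearest, saturating at the grid edges; this is an illustration of the format, not the gfx12 hardware conversion:

```python
import numpy as np

def e4m3_values():
    """Enumerate the finite values of OCP FP8 e4m3 (bias 7)."""
    vals = {0.0}
    for e in range(16):          # 4-bit exponent field
        for m in range(8):       # 3-bit mantissa field
            if e == 15 and m == 7:
                continue         # the NaN encoding; no infinities in e4m3
            if e == 0:
                v = (m / 8.0) * 2.0 ** -6            # subnormals
            else:
                v = (1.0 + m / 8.0) * 2.0 ** (e - 7)  # normals, up to 448
            vals.add(v)
    grid = np.array(sorted(vals))
    # Mirror the positive grid to get the negative values (skip the 0 entry).
    return np.concatenate([-grid[:0:-1], grid])

def quantize_e4m3(x):
    """Round each element to the nearest e4m3 value (saturates at +/-448)."""
    grid = e4m3_values()
    x = np.atleast_1d(np.asarray(x, dtype=np.float64))
    idx = np.abs(x[:, None] - grid[None, :]).argmin(axis=1)
    return grid[idx]
```

Rounding mode and NaN handling here are simplifications; real conversions also choose between round-to-nearest-even and stochastic rounding.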
  14. 02 Dec, 2024 2 commits
  15. 30 Nov, 2024 2 commits