1. 28 Aug, 2024 1 commit
    • [CK_TILE] Add PagedAttention kernels (#1387) · c1569892
      Po Yen Chen authored
      
      
      * Use dictionary to config all the functions
      
      * Add init codegen logic for fmha fwd appendkv
      
      * Call HIP_CHECK_ERROR() macro to get real source info
      
      * Set up meaningful arguments
      
      * Sync kernel name with the codegen
      
      * Add knew/vnew tensors to the kernel argument
      
      * Fix wrong K values after appending
      
      * Fix vnew append error
      
      * Extract common logics
      
      * Fix Vnew tile dstr for row major case
      
      * Conditionally add fwd_splitkv API in fmha_fwd example
      
      * Conditionally add call to fmha_fwd_splitkv()
      
      * Remove "EXAMPLE_" prefix of cmake variables
      
      * Register API handlers automatically
      
      * Early return if 0 < s_k_new is not supported
      
      * Show message if we are ignoring option
      
      * Unify CMakeLists.txt coding style
      
      * Set num_splits=1 if split-kv is not supported
      
      * Add length/stride getters for HostTensor
      
      * Add RoPE example utilities
      
      * Add reference_rotary_position_embedding() (not implemented)
      
      * Finish reference_rotary_position_embedding() impl
      
      * Fix typo of HostTensor<>::get_length()
      
      * Fix compilation errors
      
      * Fix wrong answer when interleaved=false
      
      * Fix wrong answer when interleaved=true
      
      * Append K/V in the host verification code
      
      * Simplify K appending logics
      
      * Simplify v_host_ref definition
      
      * Reduce input/output dimensions
      
      * Rename function: add "batched" prefix
      
      * Apply RoPE on host side
      
      * Rename RoPE utility function
      
      * Fix wrong tensor size
      
      * Avoid invoking deprecated method 'find_module'
      
      * Pass RoPE kernel args
      
      * Create Rotary Cos/Sin tile windows in kernel
      
      * Add compute data type alias for RoPE
      
      * Randomly generate seqlen_knew if needed
      
      * Fix seqlen_knew enabling check logic
      
      * Add minimum seqlen_k to generate compliant kvcache
      
      * Fix compilation error in debug mode
      
      * Fix wrong boundaries
      
      * Fix wrong seqlen_k for kvcache
      
      * Rename variables used in distribution encoding
      
      * Fix rotary cos/sin tensor/tile size
      
      * Add constraint to the rotary_dim option
      
      * Remove unused inner namespace
      
      * Add dram distribution for rotary_cos/rotary_sin (interleaved)
      
      * Only apply interleaved RoPE on Knew for now
      
      * Fix wrong thread starting offset
      
      * Instantiate multiple kernels for RoPE approaches
      
      * Clean-up pipeline
      
      * Fix error in RoPE host reference
      
      * Handle RoPE half-rotated logics
      
      * Support 8x rotary_dim under half-rotated RoPE
      
      * Add comment
      
      * Apply elementwise function to the loaded tiles
      
      * Unify parameter/variable naming style
      
      * Remove constness from q_ptr
      
      * Add code blocks for q_tile
      
      * Apply RoPE to q_tile
      
      * Remove debug print code in kernel
      
      * Fix wrong knew/vnew appending positions
      
      * Use better naming for tile indices
      
      * Add make_tile_window() for adding distribution only
      
      * Skip code if # of block is more than needed
      
      * Move thread locating logics into policy
      
      * Remove always true static_assert()
      
      * Rename header
      
      * Rename RotaryEmbeddingEnum
      
      * Extract rotary embedding logic out
      
      * Re-order parameters
      
      * Align naming of some tile size constants
      
      * Rename more tile size constants
      
      * Fix wrong grid size
      
      * Fix wrong shape of knew_host/vnew_host
      
      * Fix wrong index into knew_host/vnew_host
      
      * Fix wrong rotary_cos/rotary_sin memory size for Q
      
      * Extract Q/Knew vector size to helper methods
      
      * Use different rotary_cos/rotary_sin distr for Q/Knew
      
      * Update host/device specifiers
      
      * Fix wrong data type for Q rotary_cos/rotary_sin
      
      * Remove RoPEComputeDataType type alias
      
      * Shift rotary_cos/rotary_sin by cache_seqlen_k
      
      * Add comment on why 't' is used for all padding flags
      
      * Align commit message to the real comment
      
      * Fix wrong pipeline
      
      * Rename utility function
      
      * Disable host verification if API not exist
      
      * Fix wrong rope key for fp8 pipeline
      
      * Allow applying RoPE only on Q (without appending KV)
      
      * Add append-kv smoke tests
      
      * Remove debug statements
      
      * Remove more debug statements
      
      * Re-arrange the 'set +x' command
      
      * Remove no-longer used method in pipeline
      
      * Add missing init code
      
      * Refine pipeline padding settings
      
      * Enlarge rotary_dim limit (8 -> 16)
      
      * Enlarge KPerThread for rotary_interleaved=false
      
      * Update rotary_dim range in smoke_test_fwd.sh
      
      * Add template argument 'kIsPagedKV' for splitkv kernels
      
      * Launch splitkv kernel if given page_block_size
      
      * Fix wrong kernel name
      
      * Fix seqlen_k_min for pre-fill case (1 -> 0)
      
      * Add copy_const<> type trait
      
      * Add another make_tile_window()
      
      * Introduce 'TileWindowNavigator' types
      
      * Simplify TileWindowNavigator interfaces
      
      * Fix tile window navigation bugs
      
      * Disable calling fmha_fwd()
      
      * Remove unnecessary data members
      
      * Simplify more make_tile_window() overloads
      
      * Move V tile through TileWindowNavigator
      
      * Fix uneven split checking logic
      
      * Move code after deciding seqlen_q/seqlen_k
      
      * Make sure we always start reading complete tile
      
      * Use 128 as minimum page_block_size
      
      * Fix wrong origin for bias
      
      * Add batch_stride_k/batch_stride_v in group mode
      
      * Unify origin
      
      * Add missing kernel arguments for group mode
      
      * Add paged-kv codegen logic for appendkv kernels
      
      * Add block_table kernel args for appendkv kernel
      
      * Add tile navigators to the appendkv kernel
      
      * Fix wrong tensor descriptor lengths
      
      * Pass re-created tile window to pipeline
      
      * Fix wrong strides for appendkv kernel
      
      * Allow transit tile_window to another page-block
      
      * Handle cross-page-block write
      
      * Do not perform write again if already in last page-block
      
      * Always add fmha_fwd() api
      
      * Add missing group mode argument
      
      * Remove debug macro usages
      
      * Rename option s_k_new to s_knew
      
      * Separate splitkv/non-splitkv args/traits
      
      * Remove fmha_fwd_dispatch()
      
      * Fix compilation errors
      
      * Remove dropout code in splitkv kernel
      
      * Allow problem types without defining kHasDropout attr
      
      * Use generic lambda to init traits objects
      
      * Separate more non-splitkv & splitkv traits/args
      
      * Display more info for specific kernels
      
      * Show more detailed warning message
      
      * Rename 'max_num_blocks' to 'max_num_page_blocks'
      
      * Remove no-longer used pipeline files
      
      * Wrap code by #if directives
      
      * Move functors to the beginning of validation code
      
      * Use generic lambda to init all the api traits/args
      
      * Fix wrong seqlen for kvcache
      
      * Add missing comment
      
      * Rename TileWindowNavigator to PageBlockNavigator
      
      * Only expose necessary methods (not attributes)
      
      * Re-order pipeline parameters
      
      * Refine smoke_test_fwd.sh
      
      * Fix wrong argument count
      
      * Make tile window directly via PageBlockNavigator
      
      * Remove unused template parameter
      
      * Remove group mode from appendkv kernel
      
      * Fix skcheck logic
      
      * Fix wrong syntax in skcheck expr
      
      * Use meaningful options in smoke test
      
      * Remove options
      
      * Fix formatting
      
      * Fix more format
      
      * Re-organize bash functions
      
      * Pass cache_batch_idx to kernels
      
      * Support cache_batch_idx in example
      
      * Fix compilation error
      
      * Add more appendkv test
      
      * Add more case for appendkv
      
      * Fix nonexistent attribute
      
      * Remove 0 < seqlen_knew constraint
      
      * Clarify the case in warning message
      
      * Remove macro checking
      
      * Force batch mode when invoking appendkv & splitkv apis
      
      * Fix mode overriding logics
      
      * Fix wrong parameter name
      
      * Randomize seqlen_k if use kvcache
      
      * Use randomized seqlen_k for kvcache
      
      * Avoid using too small rotary_cos & rotary_sin
      
      * Rename parameter
      
      * Add seqlen_q & seqlen_k rules
      
      * Add comment
      
      * Add more comments
      
      * Fix compilation errors
      
      * Fix typo in comment
      
      * Remove type argument
      
      * Avoid seqlen_k=0 for kvcache
      
      * Revert "Avoid seqlen_k=0 for kvcache"
      
      This reverts commit 21c4df89e416182e8e9bc78e67bd4b98dbb6c88d.
      
      * Fix wrong uneven split checking logics
      
      * Only randomize kvcache seqlen_k if 1 < batch
      
      * Return earlier if split is empty
      
      * Revert "Only randomize kvcache seqlen_k if 1 < batch"
      
      This reverts commit b9a4ab0d7e3c2beecc0fccafd2a13259dd06299c.
      
      * Re-order seqlen_k_start adjustment logics
      
      * Fix compilation errors
      
      * Re-format script
      
      * Find executable from folder automatically
      
      * Fix kvcache seqlen_k generating logic
      
      * Make comment more clear
      
      * Fix wrong knew/vnew appending logic on host
      
      * Add s_barrier to sync threads
      
      * Revert "Add s_barrier to sync threads"
      
      This reverts commit d3f550f30c0a4d9df15c613015d5dff268d6746d.
      
      * Support only using 1 row of rotary_cos/rotary_sin
      
      * Rotate Q in different way
      
      * Unify tensor view creation logics
      
      * Fix wrong argument
      
      * Add mask to switch how we use the rotary_cos/sin
      
      * Move attr from traits to problem
      
      * Move has_mask to fmha_fwd_appendkv_args
      
      * Support using uint32_t as SAD operand in Alibi<>
      
      * Use sad_u32() in splitkv kernels
      
      * Store tensor views in PageBlockNavigator
      
      * Use stored tensor view to update tile windows
      
      * Enlarge tensor view size
      
      * Remove debug code
      
      * Fix wrong tensor view size
      
      * Wrap tensor view into PageBlockNavigator
      
      * Add DataType member to PageBlockNavigator
      
      * Remove unnecessary member functions
      
      * Refine macro use
      
      * Fix typo
      
      * Add blank line between directives and actual code
      
      * Re-format files
      
      * Remove type in comment
      
      ---------
      Co-authored-by: carlushuang <carlus.huang@amd.com>
      Co-authored-by: rocking <ChunYu.Lai@amd.com>
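The RoPE-related commits above (reference_rotary_position_embedding, the interleaved=true/false fixes, and the half-rotated handling) revolve around two layouts of rotary position embedding. A minimal NumPy sketch of the host-side reference, with names chosen for illustration rather than taken from the CK_TILE sources:

```python
import numpy as np

def rotary_embedding(x, cos, sin, interleaved):
    """Apply RoPE to x of shape [seqlen, rotary_dim].

    cos/sin have shape [seqlen, rotary_dim // 2].
    interleaved=True  rotates adjacent pairs (x0,x1), (x2,x3), ...
    interleaved=False rotates the two halves (x[:d/2], x[d/2:])
    against each other (the "half-rotated" layout).
    """
    d = x.shape[-1]
    if interleaved:
        x0, x1 = x[..., 0::2], x[..., 1::2]
        out = np.empty_like(x)
        out[..., 0::2] = x0 * cos - x1 * sin
        out[..., 1::2] = x0 * sin + x1 * cos
    else:
        x0, x1 = x[..., : d // 2], x[..., d // 2:]
        out = np.concatenate([x0 * cos - x1 * sin,
                              x0 * sin + x1 * cos], axis=-1)
    return out
```

Since each layout only applies 2D rotations, both preserve the per-row norm of the input, which is a handy sanity check for a host reference.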
  2. 21 Aug, 2024 2 commits
    • Adding Instances and Examples for FP8-based Scaled Convolution and AMAX Reduction. (#1473) · c3515f27
      Andriy Roshchenko authored
      * Enable CMakePresets build
      
      * Verify Convolution, Scaling and ReLU algorithms.
      
      * Add tensor element-wise scale and type cast operation.
      
      * Reduction implemented but does not work.
      
      * Exploration of Reduction functionality.
      
      * Completed example for Convolution scaled with ReLu activation and AMAX reduction.
      
      * WIP: Add required instances for convolution.
      
      * WIP: Create client example. Implement convolution stage.
      
      * Add elementwise instances.
      
      * Add elementwise scale + convert example.
      
      * Add reduction instances.
      
      * WIP: Client example for AMAX reduction.
      
      * WIP: Add instances for multistage reduction.
      
      * WIP: Implementation of multistage reduction.
      
      * Refactoring.
      
      * Clean up.
      
      * Add CMakePresets.json
      
      * Guard off FP8 instances when the data type is not available.
      
      * Add example for Scaled FP8 Convolution with AMAX reduction.
      
      * Refactor CombConvScaleRelu instances.
      
      * Add CombConvScale instances.
      
      * Add client example for Scaled FP8 Convolution with AMAX reduction.
      
      * Cleanup.
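The entry above pairs a scaled FP8 convolution (with ReLU) with an AMAX reduction. A small sketch of the post-convolution scale + activation + AMAX step, assuming the common e4m3 convention where the reduced AMAX feeds the next scale factor; the function name and signature are illustrative, not the CK API:

```python
import numpy as np

F8_MAX = 448.0  # largest finite value of the e4m3 fp8 format

def conv_scale_relu_amax(conv_out, scale):
    """Scale a raw convolution result, apply ReLU, and reduce AMAX.

    AMAX (the maximum absolute value) of the scaled output is what a
    caller would use to derive the next iteration's fp8 scale factor.
    """
    y = np.maximum(conv_out * scale, 0.0)          # scale + ReLU
    amax = np.abs(y).max()                         # AMAX reduction
    next_scale = F8_MAX / amax if amax > 0 else 1.0
    return y, amax, next_scale
```

On the device the AMAX reduction is the multistage reduction the commits mention; here it collapses to a single `max` over the tensor.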
    • Set RNE fp8 conversion as a default (#1458) · e20f20ef
      Rostyslav Geyyer authored
      * Set RNE fp8 conversion as a default
      
      * Update f8 tests
      
      * Disable failing test on gfx11
      
      * Update bf8 tests
      
      * Add a flag
      
      * Fix the flag
      
      * Raise flag for gfx10 as well
      
      * Temp commit for tolerance testing
      
      * Update tolerances
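Round-to-nearest-even (RNE) is the tie-breaking rule this PR makes the default for fp8 conversion. A toy sketch of RNE rounding to an e4m3-style 3-bit mantissa, leaning on Python's built-in banker's rounding; exponent range, subnormals, and overflow/saturation handling are deliberately omitted:

```python
import math

def round_to_mantissa_rne(x, mant_bits=3):
    """Round x to a float with `mant_bits` explicit mantissa bits
    (e4m3-style) using round-to-nearest-even."""
    if x == 0.0:
        return 0.0
    m, e = math.frexp(x)           # x = m * 2**e, with 0.5 <= |m| < 1
    scale = 2 ** (mant_bits + 1)   # grid of representable mantissas
    q = round(m * scale)           # Python's round() is round-half-even
    return math.ldexp(q / scale, e)
```

The interesting cases are exact ties: a value halfway between two representable mantissas rounds to the one with an even last bit, rather than always rounding up.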
  3. 16 Aug, 2024 1 commit
    • [CK_TILE] FA bwd kernels optimization (#1397) · 79a5d9c1
      Dan Yao authored
      
      
      * tmp save
      
      * fix batch deterministic bugs
      
      * fix group deterministic bugs
      
      * codegen update
      
      * reorder files
      
      * bias support
      
      * hd256 bias support
      
      * bwd smoke test update
      
      * simplify convert dq
      
      * fix hd256 dropout scratch
      
      * do{}while() -> while(){}
      
      * comments
      
      * remove FmhaBwdTilePartitioner
      
      * save clear_tile
      
      * refactor dropout
      
      * code cleanup
      
      * code cleanup
      
      * comments
      
      * fix epilogue problem
      
      * fix fwd dropout
      
      * group convert_dq opt
      
      * fix dq alignment
      
      * Do not store storerandval in bwd for flash attention integration
      
      * fix hd32 error and boost performance
      
      * revert
      
      * Remove duplicated WarpGemm definitions in the policy file
      
      * dropout patch for mrepeat 16*16
      
      * code sync up
      
      * dq_acc stride
      
      * dq_acc stride stuff
      
      * codegen update
      
      * fwd dropout revert
      
      * fix hd128 scratches and boost performance
      
      * receipt 3 for simplified smoke test
      
      * more strides for fa integration
      
      * fix hd64 scratches and boost performance
      
      * non-iglp pipeline for headdim padding cases
      
      * dpad same as dvpad for flash attention integration
      
      * unpadded lse&d for group mode
      
      * Support unpad layout for group lse
      
      * Support unpad lse layout for splitkv
      
      * Fix stride for splitkv kernel
      
      * fix unpadded lse issue in fwd splitkv
      
      * comment
      
      * solve lds read&write conflicts
      
      * rename
      
      * bias rename
      
      * tile index revert
      
      ---------
      
      Co-authored-by: danyao12 <danyao12>
      Co-authored-by: rocking <ChunYu.Lai@amd.com>
      Co-authored-by: Qianfeng Zhang <Qianfeng.Zhang@amd.com>
  4. 14 Aug, 2024 1 commit
    • [GEMM] gemm_universal related optimization (#1453) · 3049b546
      Haocong WANG authored
      
      
      * replace buffer_atomic with global_atomic
      
      * fixed global_atomic_add
      
      * added bf16 atomic_add
      
      * format
      
      * clang-format-12
      
      * clean
      
      * clean
      
      * add guards
      
      * Update gtest.cmake
      
      * enabled splitk_gemm_multi_d
      
      * format
      
      * add ckProfiler
      
      * format
      
      * fixed naming
      
      * format
      
      * clean
      
      * clean
      
      * add guards
      
      * fix clang format
      
      * format
      
      * add kbatch printout
      
      * clean
      
      * Add rocm6.2 related gemm optimization
      
      * Limit bf16 atomic usage
      
      * remove redundant RCR gemm_universal instance
      
      * Add RRR fp8 gemm universal instance
      
      * Bug fix
      
      * Add GPU_TARGET guard to FP8/BF8 target
      
      * bug fix
      
      * update cmake
      
      * remove all fp8/bf8 example if arch not support
      
      * Enable fp8 RRR support in ckProfiler
      
      * limit greedy-reverse flag to gemm_universal in ckProfiler
      
      ---------
      Co-authored-by: Jing Zhang <jizhan@fb.com>
      Co-authored-by: Jing Zhang <jizhan@meta.com>
      Co-authored-by: zjing14 <zhangjing14@gmail.com>
      Co-authored-by: Illia Silin <98187287+illsilin@users.noreply.github.com>
      Co-authored-by: illsilin <Illia.Silin@amd.com>
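Both the buffer→global atomic change and the splitk_gemm_multi_d work above rest on the same idea: each workgroup owns a K-slice, computes a partial product, and accumulates it into C with an atomic add. A NumPy sketch of the arithmetic (the `+=` stands in for the device-side atomic; the function name is illustrative):

```python
import numpy as np

def splitk_gemm(a, b, k_batch):
    """Split-K GEMM: each of k_batch "workgroups" multiplies a K-slice
    and accumulates its partial product into C. On the GPU this
    accumulation is a global atomic add; here it is a plain += since
    the partials are independent and addition is associative
    (up to floating-point rounding)."""
    m, k = a.shape
    _, n = b.shape
    c = np.zeros((m, n), dtype=a.dtype)
    bounds = np.linspace(0, k, k_batch + 1, dtype=int)
    for k0, k1 in zip(bounds[:-1], bounds[1:]):
        c += a[:, k0:k1] @ b[k0:k1, :]   # one workgroup's atomic update
    return c
```

This is also why limiting bf16 atomic usage matters: low-precision atomic accumulation compounds rounding error across splits, so kbatch and the accumulation type have to be chosen together.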
  5. 13 Aug, 2024 1 commit
  6. 10 Aug, 2024 1 commit
  7. 09 Aug, 2024 1 commit
  8. 07 Aug, 2024 1 commit
  9. 06 Aug, 2024 3 commits
  10. 31 Jul, 2024 2 commits
  11. 30 Jul, 2024 1 commit
  12. 25 Jul, 2024 1 commit
  13. 24 Jul, 2024 2 commits
    • Adding more instances of grouped convolution 3d forward for FP8 with... · 4a8a1bef
      Andriy Roshchenko authored
      Adding more instances of grouped convolution 3d forward for FP8 with ConvScale+Bias element-wise operation. (#1412)
      
      * Add CMakePresets configurations.
      
      * Add binary elementwise ConvScaleAdd and an example.
      
      * Numerical verification of results.
      
      Observed significant irregularities in F8 to F32 type conversions:
      ```log
      ConvScaleAdd: float=145.000000   f8_t=160.000000    e=144.000000
      ConvScaleAdd: float=97.000000   f8_t=96.000000    e=104.000000
      ConvScaleAdd: float=65.000000   f8_t=64.000000    e=72.000000
      ```
      
      * Implemented ConvScaleAdd + Example.
      
      * Add ConvScale+Bias Instances
      
      * Add Client Example for ConvScale+Bias
      
      * Fix number of bytes in an example.
      
      * Cleanup.
    • Add support for half_t and bfloat to reduction operations (#1395) · ffabd70a
      Bartłomiej Kocot authored
      * Add support for half_t and bfloat to reduction operations
      
      * Fix bhalf convert
      
      * Next fix bf16
  14. 22 Jul, 2024 1 commit
  15. 19 Jul, 2024 3 commits
    • [GEMM] F8 GEMM, performance optimized. (#1384) · 8c90f25b
      Haocong WANG authored
      
      
      * add ab_scale init support
      
      * enabled interwave
      
      * add scale type; update isSupport
      
      * adjust example
      
      * clean
      
      * enable f8 pure gemm rcr ckprofiler
      
      * Add gemm_multiply_multiply instances
      
      * clang format
      
      * Optimize for ScaleBlockMNK=128
      
      * enable abscale f8 gemm ck profiler
      
      * Add pure f8 gemm test suite
      
      * Reverting to the state of project at f60fd77
      
      * update copyright
      
      * clang format
      
      * update copyright
      
      ---------
      Co-authored-by: root <jizhan@amd.com>
    • Universal gemm splitk using reduce (with multi-d) (#1341) · c544eb4d
      ltqin authored
      
      
      * init for reduce_threadwise multi_d
      
      * add reduce_threadwise_multi_d
      
      * add reduce_multi_d
      
      * clean
      
      * start add an other splitk device op
      
      * add reduce template parameter to SplitKBatchOffset
      
      * add reduce c matrix
      
      * clean up code
      
      * change example data type to bf16
      
      * add bf16Ai8B example
      
      * remove reduce template parameter
      
      * add splitk atomic status to v4
      
      * example add multi d parameters
      
      * device op add multi-d parameters
      
      * add multi-d to reduce
      
      * fix kbatch=1 bug
      
      * change B layout to col in bf16Ai8B example
      
      * remove float adding struct
      
      * change  multi-d interface
      
      * change file and class name
      
      * remove multi-d of bf16Ai8B example
      
      * change IsReduce function to IsReduceAdd
      
      * change example layout to RRR from RCR
      
      * set ds stride according to layout
      
      * reset parameter layout
      
      * add gemm universal reduce instance
      
      * add reduce factory
      
      * add profile_gemm_universal_reduce
      
      * add reduce to profiler
      
      * fix reduce instance
      
      * fix profiler reduce compiling bug
      
      * format
      
      * format library instance code
      
      * add mem instance for reduce library
      
      * fix call instance names
      
      * add workspace for reduce in ckProfiler
      
      * format
      
      * add mnpadding to reduce library instance
      
      * add fp16 instance to reduce of profiler
      
      * change copyright time
      
      * restore profiler cmake file
      
      * add reduce text to instances
      
      * add DsLayout and DsDataType to instances template parameter
      
      * fixed gemm_reduce_multi_d
      
      * add an example without multi_d
      
      * Update common.hpp
      
      * Update gtest.cmake
      
      * Update gemm_xdl_splitk_reduce_bf16.cpp
      
      * clean
      
      * Update gtest.cmake
      
      * format
      
      * fix api
      
      * format
      
      * default parameter change to RRR
      
      * add vector_len for multi_d
      
      * format
      
      * Update gtest.cmake
      
      * fix bf16Ai8B elementwise op
      
      * add ReduceDataType
      
      * move ReduceDataType to end position
      
      * format
      
      * remove googletest git method address
      
      * fix copyright time
      
      * update init data
      
      ---------
      Co-authored-by: root <jizhan@amd.com>
      Co-authored-by: letaoqin <letaoqin@amd.com>
      Co-authored-by: Jing Zhang <jizhan@meta.com>
      Co-authored-by: zjing14 <zhangjing14@gmail.com>
    • Refactor transform conv to gemm fwd (#1391) · 70a814f1
      Bartłomiej Kocot authored
      * Refactor transform conv to gemm fwd
      
      * fixes codegen
      
      * wmma fixes
      
      * fix wmma
      
      * Fix copyright
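Lowering forward convolution to GEMM, the transform this PR refactors, amounts to an im2col gather followed by a matrix multiply. A stride-1, no-padding NumPy sketch; the NHWC/KYXC layouts and the function name are illustrative, not the CK transform itself:

```python
import numpy as np

def conv_fwd_as_gemm(x, w):
    """Forward convolution lowered to GEMM via im2col.

    x: [N, H, W, C] input, w: [K, Y, X, C] weights,
    stride 1, no padding, no dilation.
    """
    n, h, wd, c = x.shape
    k, y, xk, _ = w.shape
    ho, wo = h - y + 1, wd - xk + 1
    # Gather input patches -> GEMM "A" matrix [N*Ho*Wo, Y*X*C]
    cols = np.empty((n, ho, wo, y, xk, c), dtype=x.dtype)
    for i in range(y):
        for j in range(xk):
            cols[:, :, :, i, j, :] = x[:, i:i + ho, j:j + wo, :]
    a = cols.reshape(n * ho * wo, y * xk * c)
    b = w.reshape(k, y * xk * c).T          # GEMM "B" matrix
    return (a @ b).reshape(n, ho, wo, k)
```

The real transform avoids materializing `cols` by building a tensor descriptor that computes the gather addresses on the fly, but the index arithmetic is the same.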
  16. 17 Jul, 2024 1 commit
  17. 16 Jul, 2024 1 commit
  18. 12 Jul, 2024 1 commit
  19. 08 Jul, 2024 1 commit
    • [CK_TILE] wa prec, remove sgpr offset for inline asm (#1356) · 8182976c
      carlushuang authored
      
      
      * wa prec, remove sgpr offset for inline asm
      
      * macro for set tile
      
      * ignore unused param if no kernel instances in host API
      
      * fix more prec issue
      
      * cache buffer resource
      
      * fix
      
      * support pre-nop
      
      * clear tile by vector type members
      
      * add workaround to reduce scratch memory
      
      * conditionally enable workaround code
      
      * enable workaround start from certain build version
      
      * fallback set_tile() implementation from certain build version
      
      * undo template argument changes
      
      * put dummy asm in load_raw()
      
      * fix comments, refactor s_nop inside buffer_load
      
      ---------
      Co-authored-by: PoYen, Chen <PoYen.Chen@amd.com>
  20. 06 Jul, 2024 1 commit
    • Universal streamk with atomics (#1360) · 75e622f0
      Harisankar Sadasivan authored
      * Universal streamk with atomics, with ckProfiler support. grid_size and the streamk strategy are tunable: a grid_size of -1 leads to #WGs = maximum occupancy x num_CUs. The implementation supports several stream-k policies (1-tile, 2-tile, 3-tile and 4-tile); a streamk strategy of -1 selects the default policy (4-tile).
      
      * Update README.md
      
      * fixing clang-format issues
      
      * removed conflicts in struct members between streamk and universal streamk
      
      * corrected arg parsing for streamk and universal streamk
      
      * added stream-k policies for 3 tile and 4 tile
      
      * fixed argument type issue with parsing cmd args
      
      * changes suggested in PR review are made- removing comments and correcting copyright
      
      * file permissions updated
      
      * added default value support for grid_size and streamk-policy selection set to -1
      
      * print messages for arguments
      
      * print messages for arguments
      
      * print messages for arguments1
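Stream-K's central trick is partitioning the flattened K-loop iterations of all output tiles evenly across workgroups, so a split can cross tile boundaries (partial tiles are later combined, e.g. with the atomics this PR uses). A sketch of that partitioning, simplified from the tunable multi-tile policies described above; names are illustrative:

```python
def streamk_partition(num_tiles, iters_per_tile, num_wgs):
    """Stream-K work partitioning.

    The total K-loop iterations over all output tiles are split as
    evenly as possible across workgroups, so a workgroup may finish
    one tile's tail and start the next tile's head. Returns per-WG
    (start_iter, end_iter) over the flattened iteration range.
    """
    total = num_tiles * iters_per_tile
    base, rem = divmod(total, num_wgs)
    ranges, start = [], 0
    for wg in range(num_wgs):
        count = base + (1 if wg < rem else 0)  # spread the remainder
        ranges.append((start, start + count))
        start += count
    return ranges
```

A given iteration index maps back to `(tile, k_iter) = divmod(i, iters_per_tile)`, which is how a workgroup discovers whether it owns a full tile or only a fragment.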
  21. 04 Jul, 2024 2 commits
  22. 27 Jun, 2024 2 commits
  23. 26 Jun, 2024 2 commits
    • [CK_TILE] fmha forward split-kv + combine kernels (#1338) · 0cb2e06d
      Po Yen Chen authored
      
      
      * FA fwd dropout
      
      * FA bwd
      
      * epilogue reuse
      
      * CMakeLists update
      
      * [CK_TILE] support alibi (#1269)
      
      * add alibi support
      
      * fix code
      
      * update code based on comment
      
      * Support more hdim
      
      * fix fp8 bias
      
      * support seqlen_k=0 case
      
      * remove unused printf
      
      * fix format
      
      ---------
      Co-authored-by: rocking <ChunYu.Lai@amd.com>
      
      * now fwd/bwd can build
      
      * bwd alibi
      
      * add bwd validation stream_config
      
      * update generated filenames
      
      * update bwd kernel launch
      
      * CK_TILE_HOST_DEVICE in philox
      
      * Transpose -> transpose
      
      * format
      
      * format
      
      * format
      
      * Generate the instance for FA required
      
      * format
      
      * fix error in WarpGemm
      
      * Add num_splits option and dummy split-kv api method
      
      * Generate fmha_fwd_splitkv()
      
      * Add SplitKV kernel codegen logics
      
      * Add SplitKV combine kernel codegen logics
      
      * Fix mismatched return type
      
      * Clean-up code
      
      * Replace sentinel value before storing
      
      * Fix wrong layout of LSE/LSEacc/Oacc
      
      * Format codes
      
      * Fix o_acc memory error
      
      * Fix wrong kBlockSize used in policy
      
      * Reduce # of combine kernels
      
      * Fix split-kv combine kernel name
      
      * Fix wrong LDS indexing logics
      
      * Fix wrong loop counter step logic
      
      * Undo vector size changes
      
      * Remove no-longer used field
      
      * Remove inconsistent comment
      
      * Remove debug statements in example
      
      * Remove more debug statements
      
      * Add constness to local variables
      
      * Clean up generate.py
      
      * Fix unstable clang-format comment
      
      * Remove unused include directive
      
      * Use shorter template parameter name
      
      * Enable non-split-kv blobs
      
      * Update license date
      
      * Print num_splits conditionally
      
      * Undo disabling data types
      
      * Remove unnecessary tile size for fp8
      
      * Fix wrong pipeline args for fp8
      
      * Fix example output format
      
      * Remove more debug code in combine pipeline
      
      * Add stride kernel arguments for LSE/O acc workspace
      
      * Re-order split-kv pipeline call operator arguments
      
      * Pass LSE/O strides in kernel argument
      
      * Re-order pipeline call operator arguments
      
      * Use tensor_descriptor to locate LSEacc elements
      
      * Support providing invalid element for tensor view
      
      * Set invalid element value for LSEacc tensor view
      
      * Remove hand-written store_tile() code
      
      * Remove necessary value-overwrite logic
      
      * Add transposed lds descriptor
      
      * Support load_tile() for tile_window_with_static_lengths<>
      
      * Undo removing necessary value-overwrite logic
      
      * Use read descriptor to locate lds elements
      
      * Simplify pipeline source code
      
      * Add constraint to kMaxSplits
      
      * Default use kMaxSplits=64 in generate.py
      
      * Revert "Add constraint to kMaxSplits"
      
      This reverts commit 0a2132d758042e6fb0292f4e354909b8a4d1c118.
      
      * Revert "Default use kMaxSplits=64 in generate.py"
      
      This reverts commit c7d9c80b77320aec6559222bed7d47adcaefe4e3.
      
      * Decide alignment by the padding parameter
      
      * Remove no-longer used utility functions
      
      * Remove not-working code
      
      * Add comment & remove no-longer used code
      
      * Fix computation errors
      
      * Add heuristic to override num_splits option
      
      * Add constraint to kMaxSplits
      
      * Fix compilation error
      
      * Clean up pipeline code
      
      * Wrap pointer access as lambda function
      
      * Rename confusing methods
      
      * Use kLogMaxSplits as template parameter
      
      * Finish splitkv combine kernel codegen
      
      * Update kMaxSplits limit
      
      * Use smaller kM0 for splitkv combine kernel
      
      * Ignore dropout flag in splitkv pipeline
      
      * Unify flag usage
      
      * Add back flag kStoreLSE
      
      * Merge lambda calls in pipeline
      
      * Fix compilation errors
      
      * Avoid all empty splits
      
      * Always check for empty loop in splitkv pipelines
      
      * Re-order parameters
      
      * Remove redundant p_drop option check
      
      * Add traits/problem for fwd splitkv kernel
      
      * Conditionally enable uneven split boundary checks
      
      * Add comment for the splitkv traits field
      
      * Change even split criteria
      
      * Re-order statements
      
      * Refine occupancy value for hdim=128&256
      
      * Refine occupancy value for hdim=32&64
      
      * Remove redundant kernel argument
      
      * Separate fmha bwd codegen logics
      
      * Separate fmha fwd codegen logics
      
      * Remove redundant direction parameter in fwd&bwd codegen logics
      
      * Support generate multiple APIs for an example
      
      * Let 'api' an alias of 'direction' option
      
      * Remove choices for the 'direction' option
      
      * Use dictionary to config all the functions
      
      * Move fmha splitkv codegen logics to other file
      
      * Add fwd_splitkv api for tile_example_fmha_fwd
      
      ---------
      
      Co-authored-by: danyao12 <danyao12>
      Co-authored-by: carlushuang <carlus.huang@amd.com>
      Co-authored-by: rocking <ChunYu.Lai@amd.com>
      Co-authored-by: Jing Zhang <jizhan@amd.com>
  24. 25 Jun, 2024 1 commit
    • arai713's avatar
      CK Instance Gen (#1145) · 3e9711f0
      arai713 authored
      
      
      * Format
      
      * Format
      
      * Format
      
      * Remove const
      
      * Use the right template
      
      * Format
      
      * Format
      
      * add row/col instances
      
      * Add missing file
      
      * fixed
      
      * fixing block to etile error
      
      * Format
      
      * Updates
      
      * Format
      
      * fixed rrr layout
      
      * generating a sample JSON file: currently contains includes, prologue/epilogue and instances
      
      * version where the json is passed into the instances to generate a key
      
      * updated run function to just launch kernel
      
      * updated run function: only contains kernel object, json file is updated but still needs to be cleaned up, added front-end API to parse JSON into character buffer
      
      * adding in testing files
      
      * cleaned up comments, still need to work on including header files
      
      * removed unneeded files
      
      * removed/commented out JSON implementation
      
      * added fusion(prologue/epilogue) into instance generation
      
      * working on instance selection
      
      * added instance selection, need to fix instance validation
      
      * removed block2etile map validity check for testing purposes
      
      * test running: failing due to incorrect files/input
      
      * all grid descs/ptrs completed, but device file not found
      
      * Update test and embed modules
      
      * Restore older version
      
      * added convolution operation, written test, debugging generated code for compilation
      
      * attempting to include CK in host directory: _Float16 error
      
      * CK header file issues
      
      * slight fix
      
      * don't crash when hip can't report total memory
      
      * dump generated code to a file
      
      * changing sizes
      
      * creating tensor descriptors using CK methods: set up grid desc manually, also trying to set up an argument pointer - this needs to be fixed
      
      * some fixes to call the device code
      
      * separating test files for conv and gemm
      
      * completed arg ptr, now have linking errors
      
      * clang format fix
      
      * resolved linker issues in conv test
      
      * remove dependency on libutility from ck
      
      * resolved num dim error
      
      * properly passing arg ptr, errors with passing typenames: redefinition/redeclaration
      
      * undo the commenting of device function
      
      * hand created kernel code to find rtc issues
      
      * dump the full src to file
      
      * resolved redeclaration errors, cleaned up errors for Amber's kernel code
      
      * debugging purposes: redeclaration error
      
      * config files
      
      * resolved errors for NumTensor and redeclaration, formatted version.h
      
      * resolved most errors in manually added kernel and my own. error with calling kernel object: overloaded function type
      
      * WIP: close to getting kernel compiled
      
      * WIP: fixing rtc errors
      
      * fixed sequence errors, formatting, still one error with run fcn
      
      * yay: kernel compiles and runs
      
      * updated templated/generated version to run and compile
      
      * minor fixes
      
      * working generated example, resolved memory access error due to padding
      
      * adding in reference kernel, validation failing against reference
      
      * debugging: printing kernel argsz
      
      * reduced error in results
      
      * debugged reference kernel and output errors, added to generated version, currently debugging prologue function issues
      
      * working validation (using reference convolution) with prologue function for both hard-coded and generated version
      
      * WIP: create an alt version that creates Argument on the device
      
      * wip: added new duplicate files, fixed fusion templating errors from working example, setting up kernel arguments
      
      * wip: making necessary methods device code
      
      * added grid descs, working on grid pointers, errors with stl numerics
      
      * wip: updating kernel args - issue, replacing some std functions
      
      * replaced std::accumulate call with temp hardcoded version
      
      * wip: args causing memory issue
      
      * Construct Argument object inside the kernel and use it to call convolution device function. Code runs and verification passes
      
      * adding object file dump
      
      * temporary hardcoding of grid size, can remove device op inst + arg ptr
      
      * minor fix for grid size
      
      * added modified example where arg ptr is created on the device for generated version as well
      
      * removed device op instance and arg ptr from modified examples
      
      * moving device op file for testing purposes and to properly build CK
      
      * commenting out print-outs
      
      * adjust compiler args to produce a valid ELF file
      
      * temporary removal of validation
      
      * reverting compiler args back for working example
      
      * retrieve necessary arguments from generated template parameters in correct format
      
      * calculating grid size on host-side, still need to clean up process, pass parameters to host functions properly
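      The host-side launch-parameter calculation mentioned above typically amounts to tiling the output and launching one workgroup per tile. A minimal sketch (names and the K-split factor are illustrative assumptions, not CK's host API):

      ```python
      def grid_size(M, N, m_per_block, n_per_block, k_batch=1):
          # Hypothetical host-side grid-size calculation: one workgroup per
          # output tile, times any split-K factor. Ceil-division handles the
          # padded partial tiles at the matrix edges.
          m_blocks = -(-M // m_per_block)
          n_blocks = -(-N // n_per_block)
          return m_blocks * n_blocks * k_batch
      ```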
      
      * scaled up factory functions/wrapper structs to implement host-side launch parameter calculations using CK host side functions - in hard-coded example
      
      * temporary change to generate ELF format binary object file
      
      * removed unnecessary code, added comments
      
      * formatting fix
      
      * cleaned up code, added new tests, restructured library: move helper into CK
      
      * refactored launch parameter calculation to be more concise
      
      * renamed files and variables for more clarity/uniformity
      
      * more code cleaning, removed debug statements
      
      * moved majority of my files into codegen directory, running properly
      
      * updated Embed.cmake(string_view) in codegen directory
      
      * updated host directory to match Embed.cmake as well
      
      * added old tests in
      
      * updated instance generation methods to be more concise
      
      * removed layout from launch parameter calculation
      
      * working test
      
      * fixed issue with verification, all instances working
      
      * updated verification in other tests
      
      * removed duplicate matrix padder file, removed code dumps
      
      * removed old hard-coded tests
      
      * removed old host directory, all files in codegen directory now
      
      * fixed copyright in files
      
      * commenting out validation
      
      * renamed files
      
      * made changes for review: fixed copyright, renamed files for clarity, removed comments, refactored code
      
      * updated headers
      
      * removing duplicate file for fwd conv to gemm, merging with original file
      
      * fix building codegen with clang++ directly
      
      * resolving build error from conv_fwd_to_gemm
      
      * fix for previous error
      
      * renaming tests
      
      * created common test file
      
      * cleaned up code, added comments
      
      * renamed device op
      
      * fixed typos in comments
      
      * removed extra space
      
      * code cleanup: resolving Amber's comments
      
      * removed wrapper struct for matrix padder, fixed template
      
      * cleaned up if statements for better readability
      
      ---------
      Co-authored-by: Paul <pfultz2@yahoo.com>
      Co-authored-by: Jing Zhang <jizha@amd.com>
      Co-authored-by: M. Amber Hassaan <amber_474@yahoo.com>
      Co-authored-by: illsilin <Illia.Silin@amd.com>
      Co-authored-by: Illia Silin <98187287+illsilin@users.noreply.github.com>
  25. 24 Jun, 2024 1 commit
  26. 21 Jun, 2024 2 commits
  27. 20 Jun, 2024 3 commits