1. 15 Jul, 2023 1 commit
  2. 04 Jul, 2023 1 commit
  3. 14 Jun, 2023 1 commit
  4. 17 Apr, 2023 1 commit
  5. 14 Apr, 2023 1 commit
  6. 21 Mar, 2023 1 commit
  7. 06 Mar, 2023 1 commit
  8. 02 Mar, 2023 1 commit
  9. 21 Feb, 2023 1 commit
    • Added dropout verification for flash attention forward (#593) · 042e4b8c
      guangzlu authored

      * saved dropout random number in gridwise_batched_gemm_softmax_gemm_xdl_cshuffle_v2.hpp
      
      * modified device_grouped_gemm_softmax_gemm_permute_train_xdl_cshuffle.hpp
      
      * added z tensor for dropout storing
      
      * added z in example
      
      * can compile now
      
      * change fp16 xdl to bf16
      
      * fixed some bugs in example
      
      * changed fwd file names
      
      * fixed some bugs in fwd drop verify
      
      * Delete device_grouped_multihead_attention_forward_xdl_cshuffle
      
      * Fwd drop verify2 (#585)
      
      * fixed some bugs in fwd drop verify
      
      * Delete device_grouped_multihead_attention_forward_xdl_cshuffle
      
      * added group fwd mha dropout verify
      
      * added dropout verify for grouped mha fp16 fwd
      
      * added bf16 fwd attn dropout verify
      
      * added dropout verify to batched mha fwd
      
      * added batched flash attention fwd bf16 dropout verify

      * changed some formatting
      
      * added switch for lse storing in attn fwd
      
      * added switch for lse storing in attn fwd
      
      * resolved conflicts in reference_dropout.hpp
      
      ---------
      Co-authored-by: ltqin <letao.qin@amd.com>
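
The mechanism behind this entry deserves a note: the forward kernel stores the random number it draws for each softmax element in the Z tensor, so the host can replay exactly the same keep/drop decisions instead of trying to match a device RNG stream. A minimal host-side sketch, assuming a uint8 threshold encoding and inverted-dropout rescaling (names and encoding are illustrative, not the CK API):

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Replay dropout on the host from the saved Z tensor: an element is kept iff
// its stored random byte clears the drop threshold, and survivors are rescaled
// by 1/(1 - p_drop) (inverted dropout). Encoding assumed for illustration.
void host_dropout_from_z(std::vector<float>& p,              // softmax probabilities
                         const std::vector<std::uint8_t>& z, // per-element random bytes from the kernel
                         float p_drop)                       // dropout probability
{
    const auto threshold = static_cast<std::uint8_t>(p_drop * 255.f);
    const float rescale  = 1.f / (1.f - p_drop);
    for(std::size_t i = 0; i < p.size(); ++i)
        p[i] = (z[i] >= threshold) ? p[i] * rescale : 0.f;
}
```

The verification then feeds this host-side dropped matrix into the reference second gemm and compares against the device output.
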
  10. 12 Feb, 2023 1 commit
  11. 03 Feb, 2023 1 commit
  12. 17 Jan, 2023 1 commit
  13. 16 Jan, 2023 1 commit
  14. 14 Jan, 2023 1 commit
  15. 13 Jan, 2023 1 commit
  16. 12 Jan, 2023 1 commit
    • Remove inclusion of cmath (#551) · a17b0414
      Qianfeng authored
      * Include cmath only when compiling host code in math_v2.hpp

      * Remove the cmath include from device_base.hpp and device_permute.hpp
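
The change reads as a host-only include pattern: <cmath> is still wanted on the host path of math_v2.hpp but must not be pulled into device compilation. A sketch of that pattern, assuming HIP's __HIP_DEVICE_COMPILE__ macro (defined while compiling device code) is the discriminator:

```cpp
// Host-only include: device compilation passes must not depend on <cmath>.
#ifndef __HIP_DEVICE_COMPILE__
#include <cmath>
#endif

inline float abs_host_or_device(float x)
{
#ifndef __HIP_DEVICE_COMPILE__
    return std::abs(x);        // host path: standard library
#else
    return __builtin_fabsf(x); // device path: compiler builtin, no <cmath> needed
#endif
}
```
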
  17. 27 Dec, 2022 5 commits
  18. 07 Dec, 2022 1 commit
  19. 02 Dec, 2022 1 commit
  20. 15 Nov, 2022 1 commit
    • Add BF16 tests for batched_gemm_softmax_gemm_permute (#504) · 4c4c7328
      guangzlu authored

      * fixed a bug in the softmax reference & added bf16 examples for batched_gemm_scale_softmax_gemm
      
      * added bf16 tests for batched_gemm_softmax_gemm_permute
      
      * changed format of device_batched_gemm_softmax_gemm_permute_xdl_cshuffle_bf16_bf16_bf16_bf16_gmk_gnk_gno_gmo_instance.cpp
      
      * changed format of device_batched_gemm_softmax_gemm_permute_xdl_cshuffle_bf16_bf16_bf16_bf16_gmk_gnk_gno_gmo_instance.cpp
      
      * aligned annotations
      
      * modified CMakeLists for examples
      
      * add common example code of fp16/bf16 version for batched_gemm_scale_softmax_gemm_xdl
      
      * use macro to control the instances
      
      * added macro control into instances
      
      * clang-format some files
      
      * changed error tolerance for bf16
      
      * changed index for 10_elementwise_normalization
      
      * fixed xdlops code bug in amd_xdlops.hpp
      Co-authored-by: Po Yen Chen <PoYen.Chen@amd.com>
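
The "changed error tolerance for bf16" commit reflects the data type's precision: bf16 keeps an 8-bit significand (unit roundoff 2^-8) versus 11 bits for fp16 (2^-11), so a threshold tuned for fp16 spuriously fails bf16 runs. A hedged sketch of a relative-plus-absolute tolerance check; the thresholds are illustrative, not CK's check_err() defaults:

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Pass iff every element is within atol + rtol * |ref| of the reference.
bool check_close(const std::vector<float>& out, const std::vector<float>& ref,
                 float rtol, float atol)
{
    for(std::size_t i = 0; i < out.size(); ++i)
        if(std::fabs(out[i] - ref[i]) > atol + rtol * std::fabs(ref[i]))
            return false;
    return true;
}

// e.g. rtol = 1.f / 256 for bf16 (2^-8), 1.f / 2048 for fp16 (2^-11)
```
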
  21. 20 Sep, 2022 1 commit
    • Add 'Permute' device op & example (#408) · f584ab0c
      Po Yen Chen authored
      * Add example folder for 'DeviceElementwise'
      
      * Re-structure example files
      
      * Move common parts into common.hpp
      
      * Use stricter input
      
      * Add more helper methods in 'DeviceElementwise'
      
      * Use more specific method to write example
      
      * Allow specifying problem through command line argument

      * Allow specifying problem 'axes' through command line argument
      
      * Add check to template type argument
      
      * Add transpose_shape() to generalize shape permute
      
      * Generalize transpose utility functions
      
      * Use better name for tensor indices
      
      * Add checks in helper functions
      
      * Remove debug messages
      
      * Refine error message for check_err()
      
      * Generalize variable naming in example code
      
      * Add device op 'DevicePermute'
      
      This device op is a clone of 'DeviceElementwise'
      
      * Use 'DevicePermute' device op in example
      
      * Remove 'elementwise' from identifiers
      
      * Remove 'elementwise' from file paths
      
      * Remove base class of 'DevicePermute'
      
      * Let 'DevicePermute' ...
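
The transpose_shape() commit is the crux of the generalization: the output shape of a permute is just the input shape gathered through the axes permutation. A standalone sketch (the in-tree helper's exact signature may differ):

```cpp
#include <array>
#include <cstddef>

// Output dimension i takes its length from input dimension axes[i].
template <std::size_t N>
constexpr std::array<std::size_t, N>
transpose_shape(const std::array<std::size_t, N>& shape,
                const std::array<std::size_t, N>& axes)
{
    std::array<std::size_t, N> result{};
    for(std::size_t i = 0; i < N; ++i)
        result[i] = shape[axes[i]];
    return result;
}

// e.g. transpose_shape<4>({8, 16, 32, 64}, {0, 2, 3, 1}) == {8, 32, 64, 16}
```
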
  22. 19 Sep, 2022 2 commits
    • Conv bwd data multiple d (#404) · 27858374
      Shaojie WANG authored

      * init commit of convnd bwd data
      
      * begin compiling example
      
      * have a first version that produces a correct result
      
      * refine device level launch kernel code
      
      * add more instances in example and get correct results
      
      * clang-format
      
      * format example file
      
      * add more instances
      
      * fix instances
      
      * adding conv_bwd_data multiple_d

      * adding conv_bwd_data multiple_d
      
      * adding conv_bwd multiple d
      
      * adding conv_bwd multiple d
      
      * adding conv_bwd multiple d
      
      * refactor
      
      * refactor
      
      * adding conv bwd data multiple d
      
      * adding conv bwd data multiple d
      
      * adding conv bwd data multiple d
      
      * adding conv bwd data multiple d
      
      * adding conv bwd data multiple d
      
      * adding conv bwd data multiple d
      
      * adding conv bwd data multiple d
      
      * refactor
      
      * update conv fwd's bias impl
      
      * refactor
      
      * reorg file
      
      * clean up cmake
      
      * clean
      
      * clean
      
      * clean
      Co-authored-by: Chao Liu <lc.roy86@gmail.com>
      Co-authored-by: Chao Liu <chao.liu2@amd.com>
  23. 09 Sep, 2022 1 commit
  24. 30 Aug, 2022 2 commits
    • Gemm reduce examples int4/int8/fp32/bf16 (#368) · d00e6115
      Adam Osewski authored

      * GEMM + Reduce max fp16+fp32
      
      * GEMM + Max bf16 + int8
      
      * Refactor common definitions.
      
      * Refactor common functions of the mean/meansquare example.
      
      * More examples for mean meansquare.
      
      * Update int8 examples and skip them because of random errors.
      
      * Int4 examples.
      
      * Fix examples for max int4/8
      
      * Tensor conversion for int4 input data for mean meansquare example.
      
      * Remove int4 mean_meansquare example
      
      * Fix int8 mean_meansquare example.
      
      - All ReductionAccData and R<N>DataType have to be F32. The INT32 data
        type gives wrong results.
      
      * Guard int4 with ifdef
      
      * Change int8 example to add_addsquare due to division rounding error.
      
      * Clang format
      
      * Change the return type of common function.
      
      * Get back int8 example with division.
      
      * Remove int8 mean meansquare.
      
      * Use proper cast for BF16 data type.
      
      * Use ck::literals.
      
      * Use proper data type for host tensors & reference.
      
      - Use ReduceAccDataType for reference gemm output data type.
      - Cast host reference output tensor to EDataType
      - Fix ifdefs for int4.
      Co-authored-by: Adam Osewski <aosewski@amd.com>
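
The note that ReductionAccData has to be F32 comes down to integer arithmetic: a mean (or meansquare) reduction ends in a division, which an INT32 accumulator truncates while an fp32 accumulator keeps the fraction. A tiny self-contained illustration:

```cpp
#include <cstdint>
#include <iostream>
#include <vector>

int main()
{
    const std::vector<std::int8_t> x{3, 4, 4};

    std::int32_t int_acc = 0;
    float f_acc          = 0.f;
    for(std::int8_t v : x)
    {
        int_acc += v;
        f_acc += static_cast<float>(v);
    }

    // int32 mean truncates to 3; float mean keeps 3.6667
    std::cout << int_acc / static_cast<std::int32_t>(x.size()) << ' '
              << f_acc / static_cast<float>(x.size()) << '\n';
    return 0;
}
```
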
    • Padding for attention: bmm+scale+softmax+bmm kernel (#385) · 45adb736
      Shaojie WANG authored

      * add padding algorithm for bmm+scale+softmax+bmm (version for verification)
      
      * remove verification code
      
      * remove comments
      
      * add padded bmm scale softmax bmm example
      
      * format
      
      * refactor
      
      * add comments for usages of padding bmm+scale+softmax+bmm
      Co-authored-by: Chao Liu <lc.roy86@gmail.com>
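
The padding these commits add is the usual round-up-to-tile scheme: each gemm dimension is padded to a multiple of its block tile so every workgroup sees a full tile, and the out-of-range region is masked (for the softmax dimension, padded columns must receive -inf scores so they get zero weight). A minimal sketch of the size computation:

```cpp
#include <cstddef>

// Round a problem dimension up to a whole number of block tiles.
constexpr std::size_t round_up(std::size_t n, std::size_t tile)
{
    return (n + tile - 1) / tile * tile;
}

// e.g. a sequence length of 1000 with a 128-wide tile pads to 1024
static_assert(round_up(1000, 128) == 1024, "8 tiles of 128");
```
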
  25. 25 Aug, 2022 1 commit
  26. 24 Aug, 2022 1 commit
  27. 23 Aug, 2022 1 commit
  28. 18 Aug, 2022 1 commit
  29. 13 Aug, 2022 4 commits
    • Layernorm welford (#346) · 0bd6b842
      rocking5566 authored

      * Add threadwise and blockwise welford
      
      * Rename gridwise op, prepare to add welford version
      
      * implement welford and integrate welford into layernorm
      
      * Take care of tail loop
      
      * Fix bug when ThreadSliceK > 1
      
      * Fix bug when merging two empty sets
      
      * Rename clip to clamp
      
      * 1. Fix type of count
      2. Remove useless static_assert
      
      * Do not inherit Reduction::Argument
      
      * [What] replace __syncthreads() with block_sync_lds()
      [Why] __syncthreads might wait both lgkmcnt(0) and vmcnt(0)
      
      * Add y stride
      
      * Rename.
      DeviceLayernorm -> DeviceLayernormImpl
      DeviceNormalization2 -> DeviceLayernorm
      
      * Move literal ""_uz & ""_zu into namespace 'literals'
      
      * Move namespace 'literals' as 'ck::literals'
      Co-authored-by: Po-Yen, Chen <PoYen.Chen@amd.com>
      Co-authored-by: Chao Liu <chao.liu2@amd.com>
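
The threadwise/blockwise Welford split follows the classic pairwise-merge formulation: each thread accumulates (count, mean, M2) over its slice, then partials are merged across the block before computing variance. A host sketch under that assumption; the early return mirrors the "merging two empty sets" fix noted above:

```cpp
#include <cstdint>

struct Welford
{
    std::int32_t count = 0;
    float mean         = 0.f;
    float m2           = 0.f; // sum of squared deviations from the mean

    void push(float x) // threadwise update
    {
        ++count;
        const float delta = x - mean;
        mean += delta / static_cast<float>(count);
        m2 += delta * (x - mean);
    }

    void merge(const Welford& other) // blockwise combine (Chan et al.)
    {
        const std::int32_t n = count + other.count;
        if(n == 0)
            return; // both partials empty: merging would divide by zero
        const float delta = other.mean - mean;
        m2 += other.m2 + delta * delta * static_cast<float>(count) *
                             static_cast<float>(other.count) / static_cast<float>(n);
        mean += delta * static_cast<float>(other.count) / static_cast<float>(n);
        count = n;
    }

    float variance() const { return count > 0 ? m2 / static_cast<float>(count) : 0.f; }
};
```
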
    • Fused GEMM+GEMM (#351) · c20a75b0
      Anthony Chang authored

      * initial stub for gemm_gemm_xdl_cshuffle
      
      * set up example code
      
      * compiles
      
      * prevent integer overflow
      
      * harmonize interface between ref_gemm and ref_batched_gemm
      
      * batched_gemm_gemm
      
      * fix example
      
      * host tensor gen: diagonal pattern in lowest two dimensions only
      
      * make C descriptors contain only integral constants
      
      * clean up
      
      * add BlockwiseGemmXdlops_v2 while exploring a unified approach
      
      * implement proper interface
      
      * tidy up example
      
      * fix compilation warnings
      
      * coarsely controlled 2nd gemm padding
      
      * remove rocm-cmake's hard requirement for certain revision
      
      * clang-format
      
      * resolve merge conflict
      
      * fix compilation error on gfx10
      
      * adds acc0 elementwise op to interface
      
      * add gemm_gemm instances and tests
      
      * avoid LDS data hazard
      
      * fix build
      Co-authored-by: Chao Liu <chao.liu2@amd.com>
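
For orientation, the computation being fused is E = (A·B0)·B1, with the first gemm's accumulator staying on-chip in the device kernel. A naive row-major host validation sketch with float accumulation (the tree's harmonized ref_gemm interface may differ):

```cpp
#include <cstddef>
#include <vector>

std::vector<float> ref_gemm_gemm(const std::vector<float>& a,  // M x K
                                 const std::vector<float>& b0, // K x N
                                 const std::vector<float>& b1, // N x O
                                 std::size_t M, std::size_t K,
                                 std::size_t N, std::size_t O)
{
    std::vector<float> acc0(M * N, 0.f); // first gemm: Acc0 = A * B0
    for(std::size_t m = 0; m < M; ++m)
        for(std::size_t k = 0; k < K; ++k)
            for(std::size_t n = 0; n < N; ++n)
                acc0[m * N + n] += a[m * K + k] * b0[k * N + n];

    std::vector<float> e(M * O, 0.f); // second gemm: E = Acc0 * B1
    for(std::size_t m = 0; m < M; ++m)
        for(std::size_t n = 0; n < N; ++n)
            for(std::size_t o = 0; o < O; ++o)
                e[m * O + o] += acc0[m * N + n] * b1[n * O + o];
    return e;
}
```
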
    • Skip lds of b matrix (#326) · 10b3278b
      ltqin authored
      * start
      
      * read for gridwise gemm
      
      * add MakeBGridDescriptor_K0_N0_N1_N2_N3_K1
      
      * add thread copy desc and register buffer
      
      * add K0PerBlock dim
      
      * add read global data
      
      * finish gridwise gemm
      
      * finish blockwise gemm
      
      * add print data
      
      * add smallest config
      
      * add compare code for gridwise gemm
      
      * fix NXdlPerWave
      
      * fix k0perthread and gridwise gemm main loop
      
      * remove b matrix lds alloc
      
      * fix name
      
      * add test code
      
      * create b_grid_desc_k0_k1_k2_n0_n1_n2_n3_k3 from parameter
      
      * add double register
      
      * modify b_thread_desc_
      
      * add float
      
      * fp16 tag
      
      * add tail for pipeline
      
      * finish main loop
      
      * optimize main loop
      
      * start clear gridwise gemm
      
      * clear code
      
      * clear redundant code
      
      * change file name
      
      * change file name
      
      * fix bug after merging develop
      
      * fix input parameters
      
      * using MultiK0 control b load data loop
      
      * fix some config
      
      * 4 buffer
      
      * fix bug
      
      * one can use
      
      * change read order
      
      * change buffer array to tuple
      
      * change to 8 buffer
      
      * interleave buffer load
      
      * change to 16
      
      * read 8 buffer
      
      * add data buffer to template
      
      * fix after merging develop (header file)
      
      * format
      
      * change to 4 buffer
      
      * remove unnecessary lambda fun
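
The buffer-count experiments in this log (double buffer, then 4, 8, 16) are register multi-buffering: once B bypasses LDS, the next tile is loaded into a spare register buffer while the current one feeds the math, hiding global-memory latency. A host-side stand-in for the pipeline shape, illustration only:

```cpp
#include <array>
#include <cstddef>
#include <vector>

float consume_tiles(const std::vector<std::array<float, 8>>& b_tiles)
{
    if(b_tiles.empty())
        return 0.f;

    std::array<std::array<float, 8>, 2> buf{}; // ring of 2 register buffers
    buf[0] = b_tiles[0];                       // prologue: prefetch first tile
    float acc = 0.f;
    for(std::size_t i = 0; i < b_tiles.size(); ++i)
    {
        if(i + 1 < b_tiles.size())
            buf[(i + 1) % 2] = b_tiles[i + 1]; // prefetch next tile into the spare slot
        for(float v : buf[i % 2])              // "math" consumes the current tile
            acc += v;
    }
    return acc;
}
```
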
    • Fused attention (#345) · cac014f1
      Anthony Chang authored

      * initial stub for gemm_gemm_xdl_cshuffle
      
      * set up example code
      
      * compiles
      
      * prevent integer overflow
      
      * harmonize interface between ref_gemm and ref_batched_gemm
      
      * batched_gemm_gemm
      
      * fix example
      
      * host tensor gen: diagonal pattern in lowest two dimensions only
      
      * make C descriptors contain only integral constants
      
      * clean up
      
      * add BlockwiseGemmXdlops_v2 while exploring a unified approach
      
      * implement proper interface
      
      * tidy up example
      
      * fix compilation warnings
      
      * coarsely controlled 2nd gemm padding
      
      * remove rocm-cmake's hard requirement for certain revision
      
      * clang-format
      
      * resolve merge conflict
      
      * fix compilation error on gfx10
      
      * adds acc0 elementwise op to interface
      
      * attention host validation
      
      * add blockwise softmax v1
      
      * iteratively update softmax+gemm
      
      * transpose both gemm0 and gemm1 xdl output so as to avoid broadcasting softmax max/sum
      
      * add init method for easier debugging
      
      * do away with manual thread cluster calculation
      
      * generalize blockwise softmax interface
      
      * row-wise softmax sum & max
      
      * format
      
      * rename to DeviceBatchedGemmSoftmaxGemm
      
      * add gemm_softmax_gemm instances and tests
      
      * comment
      Co-authored-by: ltqin <letao.qin@amd.com>
      Co-authored-by: Chao Liu <chao.liu2@amd.com>
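
The blockwise softmax sitting between the two gemms is, mathematically, a row-wise max-subtracted softmax; the row max and row sum are exactly the per-row quantities these commits track and broadcast. A one-shot host sketch of the math, leaving out the on-device iterative running-max/sum rescaling:

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

// In-place softmax over each row of a row-major rows x cols matrix.
void row_softmax(std::vector<float>& s, std::size_t rows, std::size_t cols)
{
    for(std::size_t r = 0; r < rows; ++r)
    {
        float* row = s.data() + r * cols;
        const float mx = *std::max_element(row, row + cols); // row-wise max
        float sum = 0.f;
        for(std::size_t c = 0; c < cols; ++c)
        {
            row[c] = std::exp(row[c] - mx); // subtract max for numerical stability
            sum += row[c];
        }
        for(std::size_t c = 0; c < cols; ++c)
            row[c] /= sum; // normalize by row-wise sum
    }
}
```
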
  30. 11 Aug, 2022 1 commit
    • Add examples for GEMM + AddAddFastGelu (data type: int8, bf16, fp32) (#340) · 68b61504
      Po Yen Chen authored
      * Add always_false<> util to delay symbol resolution
      
      * Use always_false<> to prevent instantiating unwanted methods
      
      * Add new specializations of AddAddFastGelu::operator() method
      
      * Add GEMM + AddAddFastGelu examples for data types: int8, bf16, fp32
      
      * Use floating point literal to simplify code
      
      * Remove unnecessary capture in lambda expressions
      
      * Extract fast GeLU calculation as standalone method
      
      * Mark methods as 'constexpr'
      
      * Add constraint for HostTensorDescriptor templated ctors
      
      * Simplify HostTensorDescriptor ctor calls
      
      * Add C++23 std::size_t literal suffix
      
      * Use _uz suffix to shorten example code
      
      * Remove unnecessary conversion to std::array<>
      
      * Re-order include directives
      
      * Remove C-style casting by literal suffix
      
      * Remove unnecessary statements in main()
      
      * Remove unused type parameter of always_false<>
      
      * Remove unused include directive
      
      * Exit main() by returning meaningful value
      
      * Use 'if constexpr' to switch example flow
      
      * Use std::is_same_v<> to shorten example code
      
      * Add 'inline' specifier to literal functions
      
      * Unify output methods in example
      
      * Move common codes into .inc file
      
      * Add type check in type_convert<>()
      
      * Add type_convert<float>() before computation
      
      * Merge AddAddFastGelu method specializations
      
      * Remove always_false<>
      
      * Add constraint to AddAddFastGelu::operator() parameter types
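
The epilogue under test composes two adds with the tanh approximation of GeLU. A sketch assuming the common formula FastGelu(x) = 0.5·x·(1 + tanh(sqrt(2/π)·(x + 0.044715·x³))); the per-data-type specializations the commits add (int8, bf16, fp32) convert to float before this computation:

```cpp
#include <cmath>

inline float fast_gelu(float x)
{
    constexpr float alpha = 0.7978845608028654f; // sqrt(2 / pi)
    return 0.5f * x * (1.f + std::tanh(alpha * (x + 0.044715f * x * x * x)));
}

// e = FastGelu(c + d0 + d1), the fused epilogue applied to the gemm output
inline float add_add_fast_gelu(float c, float d0, float d1)
{
    return fast_gelu(c + d0 + d1);
}
```
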
  31. 02 Aug, 2022 1 commit
    • CGEMM examples bf16, fp32, int8 (#332) · fb0dc358
      Adam Osewski authored

      * Add int8 specialization for elementwise Add and Subtract.
      
      * CGEMM examples bf16, fp32, int8
      
      * Add conversion of reference output to CDataType.
      
      * Skip BF16 data type during testing.
      
      * Lower K value to get rid of accumulation error.
      
      * Fix merge artifact.
      
      * Fix changed function name: GetElementSpaceSize()
      
      * Fix merge artifact.
      Co-authored-by: Adam Osewski <aosewski@amd.com>
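
For context, a CGEMM decomposes into real gemms: with A = Ar + i·Ai and B = Br + i·Bi, the output is Cr = Ar·Br − Ai·Bi and Ci = Ar·Bi + Ai·Br. A naive host sketch of that four-gemm decomposition (CK's actual kernel organization may differ):

```cpp
#include <cstddef>
#include <vector>

// c += sign * a * b for row-major M x K times K x N.
void ref_gemm(const std::vector<float>& a, const std::vector<float>& b,
              std::vector<float>& c, std::size_t M, std::size_t K, std::size_t N,
              float sign)
{
    for(std::size_t m = 0; m < M; ++m)
        for(std::size_t k = 0; k < K; ++k)
            for(std::size_t n = 0; n < N; ++n)
                c[m * N + n] += sign * a[m * K + k] * b[k * N + n];
}

void ref_cgemm(const std::vector<float>& ar, const std::vector<float>& ai,
               const std::vector<float>& br, const std::vector<float>& bi,
               std::vector<float>& cr, std::vector<float>& ci,
               std::size_t M, std::size_t K, std::size_t N)
{
    ref_gemm(ar, br, cr, M, K, N, +1.f); // Cr += Ar * Br
    ref_gemm(ai, bi, cr, M, K, N, -1.f); // Cr -= Ai * Bi
    ref_gemm(ar, bi, ci, M, K, N, +1.f); // Ci += Ar * Bi
    ref_gemm(ai, br, ci, M, K, N, +1.f); // Ci += Ai * Br
}
```
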