1. 18 Aug, 2022 1 commit
  2. 17 Aug, 2022 1 commit
  3. 15 Aug, 2022 2 commits
    • Hotfix LDS data hazard in fused attention (#360) · c961ce92
      Anthony Chang authored
      * avoid LDS data hazard in gemm_softmax_gemm pipeline
      
      * trivial refactors
      
      * comments
      
      * shrink blockwise gemm v2 thread buffer size
      
      * reclaim A block LDS space during 2nd gemm
      
      * amend
      
      * amend
      c961ce92
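      The hazard fixed here comes from reusing an LDS tile across pipeline stages without a barrier between the last read and the next write. A minimal HIP sketch of the pattern, assuming block_sync_lds() mirrors CK's LDS-only barrier (which, unlike __syncthreads(), waits only on LDS traffic):

```cpp
#include <hip/hip_runtime.h>

// Assumed to match CK's utility: wait on outstanding LDS operations
// (lgkmcnt) only, then barrier; __syncthreads() may also wait on vmcnt(0).
__device__ inline void block_sync_lds()
{
    asm volatile("s_waitcnt lgkmcnt(0)\n\ts_barrier" ::);
}

// Illustrative two-stage kernel: stage 2 reclaims the LDS tile written and
// read by stage 1, so a barrier is needed after the last stage-1 read.
__global__ void two_stage_kernel(const float* a, float* out)
{
    __shared__ float tile[256];

    tile[threadIdx.x] = a[threadIdx.x];        // stage 1 writes LDS
    block_sync_lds();                          // writes visible before reads
    float acc = tile[(threadIdx.x + 1) % 256]; // stage 1 reads a neighbor

    block_sync_lds();          // without this, stage 2 may overwrite data
    tile[threadIdx.x] = acc * 2.0f;            // stage 2 reuses the tile
    block_sync_lds();
    out[threadIdx.x] = tile[(threadIdx.x + 7) % 256];
}
```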
    • Batchnorm-forward and Batchnorm-infer Implemented using generic kernels (#320) · 53ea4713
      Qianfeng authored
      * Implement multiple-reduction in one kernel (kernels, device ops, examples)
      
      * Add generic elementwise kernel and device interface
      
      * Add generator for normal-distributed data initialization
      
      * Add host reference implementation of batchnorm-forward and batchnorm-infer
      
      * Add examples for implementing batchnorm-forward and batchnorm-infer using generic kernels
      
      * Remove unneeded includes in batchnorm example
      
      * Rename generic_elementwise to elementwise in kernel and device classes/functions
      
      * Change in gemm_layernorm examples to use DeviceElementwise instead of Device5AryElementwise
      
      * Change in example 19_binary_elementwise to use DeviceElementwise instead of DeviceBinaryElementwise
      
      * Change in device_cgemm_4gemm_xdl_cshuffle.hpp to use kernel_elementwise instead of kernel_binary_elementwise
      
      * Add DeviceElementwiseBase and use it in device_normalize_instance.cpp
      
      * Removing and renaming files
      
      * Update to synchronize gemm_layernorm client example to the generic element-wise device op API
      
      * Update to synchronize with the latest headers directory and HostTensorDescriptor interface renaming
      
      * Merge two static member functions in device_elementwise.hpp
      
      * Remove unary_elementwise_1d kernel and device
      53ea4713
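      For reference, the per-element math the batchnorm-infer host reference validates is y = gamma * (x - mean) / sqrt(variance + epsilon) + beta. A minimal host sketch, assuming an NHWC-like layout with per-channel statistics (names are illustrative, not the CK API):

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

void batchnorm_infer_nhwc(const std::vector<float>& x, std::vector<float>& y,
                          const std::vector<float>& mean,
                          const std::vector<float>& variance,
                          const std::vector<float>& gamma,
                          const std::vector<float>& beta,
                          std::size_t num_pixels, std::size_t channels,
                          float epsilon = 1e-5f)
{
    for(std::size_t p = 0; p < num_pixels; ++p)
        for(std::size_t c = 0; c < channels; ++c)
        {
            // normalize with precomputed statistics, then scale and shift
            const float inv_std = 1.0f / std::sqrt(variance[c] + epsilon);
            y[p * channels + c] =
                gamma[c] * (x[p * channels + c] - mean[c]) * inv_std + beta[c];
        }
}
```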
  4. 13 Aug, 2022 8 commits
    • fix build issue (#357) · 5ee30459
      Chao Liu authored
      * fix build
      
      * exclude example_gemm_max_xdl_fp16 from testing due to random failure on gfx908
      5ee30459
    • Change all device operations to use add_instance_library (#338) · fb1cbf02
      cloudhan authored
      
      
      * Change all device operations to use add_instance_library to avoid duplicated cmake configuration.
      
      * update DeviceMem
      Co-authored-by: Chao Liu <chao.liu2@amd.com>
      fb1cbf02
    • Layernorm welford (#346) · 0bd6b842
      rocking5566 authored
      
      
      * Add threadwise and blockwise welford
      
      * Rename gridwise op, prepare to add welford version
      
      * implement welford and integrate welford into layernorm
      
      * Take care of tail loop
      
      * Fix bug when ThreadSliceK > 1
      
      * Fix bug when merging two empty sets
      
      * Rename clip to clamp
      
      * 1. Fix type of count
      2. Remove useless static_assert
      
      * Do not inherit Reduction::Argument
      
      * [What] replace __syncthreads() with block_sync_lds()
      [Why] __syncthreads() might wait on both lgkmcnt(0) and vmcnt(0)
      
      * Add y stride
      
      * Rename.
      DeviceLayernorm -> DeviceLayernormImpl
      DeviceNormalization2 -> DeviceLayernorm
      
      * Move literal ""_uz & ""_zu into namespace 'literals'
      
      * Move namespace 'literals' as 'ck::literals'
      Co-authored-by: Po-Yen, Chen <PoYen.Chen@amd.com>
      Co-authored-by: Chao Liu <chao.liu2@amd.com>
      0bd6b842
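      The Welford pieces above replace sum/sum-of-squares accumulation with a running mean/M2 update plus a merge step for combining partial results; the "two empty sets" fix corresponds to guarding a zero combined count. A host-side sketch of both building blocks (illustrative, not CK's exact classes):

```cpp
#include <cstdint>

struct Welford
{
    float mean = 0.f, m2 = 0.f;
    int32_t count = 0; // the commit above fixed the type of this counter

    void update(float x) // threadwise: absorb one element
    {
        ++count;
        const float delta = x - mean;
        mean += delta / count;
        m2 += delta * (x - mean);
    }

    void merge(const Welford& o) // blockwise: combine two partial results
    {
        const int32_t n = count + o.count;
        if(n == 0) return; // both sets empty: keep the zero state
        const float delta = o.mean - mean;
        mean += delta * o.count / n;
        m2 += o.m2 + delta * delta * (float(count) * o.count / n);
        count = n;
    }

    float variance() const { return count > 0 ? m2 / count : 0.f; }
};
```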
    • Fused GEMM+GEMM (#351) · c20a75b0
      Anthony Chang authored
      
      
      * initial stub for gemm_gemm_xdl_cshuffle
      
      * set up example code
      
      * compiles
      
      * prevent integer overflow
      
      * harmonize interface between ref_gemm and ref_batched_gemm
      
      * batched_gemm_gemm
      
      * fix example
      
      * host tensor gen: diagonal pattern in lowest two dimensions only
      
      * make C descriptors contain only integral constants
      
      * clean up
      
      * add BlockwiseGemmXdlops_v2 while exploring a unified approach
      
      * implement proper interface
      
      * tidy up example
      
      * fix compilation warnings
      
      * coarsely controlled 2nd gemm padding
      
      * remove rocm-cmake's hard requirement for a certain revision
      
      * clang-format
      
      * resolve merge conflict
      
      * fix compilation error on gfx10
      
      * adds acc0 elementwise op to interface
      
      * add gemm_gemm instances and tests
      
      * avoid LDS data hazard
      
      * fix build
      Co-authored-by: Chao Liu <chao.liu2@amd.com>
      c20a75b0
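      The fused device op computes E = (A * B0) * B1 in a single kernel, keeping the intermediate A*B0 tile on-chip instead of in global memory. A naive host reference of the math (sketch; row-major, and `e` is assumed zero-initialized):

```cpp
#include <cstddef>
#include <vector>

void ref_gemm_gemm(const std::vector<float>& a,  // M x K
                   const std::vector<float>& b0, // K x N0
                   const std::vector<float>& b1, // N0 x N1
                   std::vector<float>& e,        // M x N1, pre-zeroed
                   int M, int K, int N0, int N1)
{
    std::vector<float> acc0(static_cast<std::size_t>(M) * N0, 0.f); // A * B0
    for(int m = 0; m < M; ++m)
        for(int k = 0; k < K; ++k)
            for(int n = 0; n < N0; ++n)
                acc0[m * N0 + n] += a[m * K + k] * b0[k * N0 + n];

    for(int m = 0; m < M; ++m) // second GEMM consumes the intermediate
        for(int n0 = 0; n0 < N0; ++n0)
            for(int n1 = 0; n1 < N1; ++n1)
                e[m * N1 + n1] += acc0[m * N0 + n0] * b1[n0 * N1 + n1];
}
```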
    • Skip lds of b matrix (#326) · 10b3278b
      ltqin authored
      * start
      
      * read for gridwise gemm
      
      * add MakeBGridDescriptor_K0_N0_N1_N2_N3_K1
      
      * add thread copy desc and register buffer
      
      * add K0PerBlock dim
      
      * add read global data
      
      * finish gridwise gemm
      
      * finish blockwise gemm
      
      * add print data
      
      * add smallest config
      
      * add compare code for gridwise gemm
      
      * fix NXdlPerWave
      
      * fix K0PerThread and gridwise gemm main loop
      
      * remove b matrix lds alloc
      
      * fix name
      
      * add test code
      
      * create b_grid_desc_k0_k1_k2_n0_n1_n2_n3_k3 from parameter
      
      * add double register
      
      * modify b_thread_desc_
      
      * add float
      
      * fp16 tag
      
      * add tail for pipeline
      
      * finish main loop
      
      * optimize main loop
      
      * start clear gridwise gemm
      
      * clear code
      
      * clear redundant code
      
      * change file name
      
      * change file name
      
      * fix bug after merging develop
      
      * fix input parameters
      
      * use MultiK0 to control the B load data loop
      
      * fix some config
      
      * 4 buffer
      
      * fix bug
      
      * one can use
      
      * change read order
      
      * change buffer array to tuple
      
      * change to 8 buffer
      
      * interleave buffer load
      
      * change to 16
      
      * read 8 buffer
      
      * add data buffer to template
      
      * fix header files after merging develop
      
      * format
      
      * change to 4 buffer
      
      * remove unnecessary lambda fun
      10b3278b
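      The point of this series is to keep B out of LDS entirely: B tiles go from global memory into a small ring of register buffers (2, 4, 8, then 16 buffers are tried above), and the blockwise GEMM consumes the oldest tile while later loads are in flight. A schematic sketch of that pipeline, with hypothetical `load_b_tile`/`run_blockwise_gemm` callables (not the CK gridwise code):

```cpp
template <int NumBuffers, typename BThreadBuf, typename LoadFn, typename GemmFn>
void run_b_register_pipeline(BThreadBuf (&b_bufs)[NumBuffers], int num_k_loops,
                             LoadFn load_b_tile, GemmFn run_blockwise_gemm)
{
    // prologue: issue the first NumBuffers global loads of B tiles
    for(int i = 0; i < NumBuffers && i < num_k_loops; ++i)
        load_b_tile(b_bufs[i], /*k_loop=*/i);

    for(int k = 0; k < num_k_loops; ++k)
    {
        run_blockwise_gemm(b_bufs[k % NumBuffers]); // consume oldest tile
        const int next = k + NumBuffers;   // interleave the next load;
        if(next < num_k_loops)             // the tail simply stops loading
            load_b_tile(b_bufs[k % NumBuffers], next);
    }
}
```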
    • Add examples for reduction fp16/fp32/bf16/int8/fp64 for 3d/4d/5d (#342) · 14932e8d
      Qianfeng authored
      * Update the reduce_blockwise example to support user specified data type and input+reducing dimensions
      
      * Add examples for using reduce_multiblock_atomic_add
      
      * Add more running examples to the default command-line
      
      * Remove unnecessary header includes
      
      * Update to the example README.md
      14932e8d
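      One strategy these examples exercise, reduce_multiblock_atomic_add, finishes a multi-workgroup reduction in a single kernel by committing each workgroup's partial result with an atomic. A minimal HIP sketch (illustrative, assuming a 256-thread workgroup and a zero-initialized output):

```cpp
#include <hip/hip_runtime.h>

__global__ void reduce_multiblock_atomic_add(const float* in, float* out, int n)
{
    __shared__ float lds[256]; // assumes blockDim.x == 256
    const int tid = threadIdx.x;

    float acc = 0.f; // grid-stride partial sum per thread
    for(int i = blockIdx.x * blockDim.x + tid; i < n;
        i += gridDim.x * blockDim.x)
        acc += in[i];

    lds[tid] = acc;
    __syncthreads();
    for(int s = blockDim.x / 2; s > 0; s >>= 1) // blockwise tree reduction
    {
        if(tid < s) lds[tid] += lds[tid + s];
        __syncthreads();
    }
    if(tid == 0)
        atomicAdd(out, lds[0]); // one atomic per workgroup, no second pass
}
```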
    • Gemm multiple d multiple r (#335) · 6c3c06bf
      rocking5566 authored
      * Imitate XXX_gemm_multiple_d, add XXX_gemm_multiple_d_multiple_r for gemm + reduction
      
      * Implement run of kernel
      
      * Add example
      
      * Fix typo in parameter
      
      * Rewrite the reduceMax example
      
      * Rewrite the reduceMean + reduceMeanSquare example
      
      * Refine naming
      
      * Refine folder name
      
      * refine naming
      
      * Rewrite the gemm + bias + relu + add + layernorm example
      
      * Rewrite the gemm + layernorm example
      
      * clang-format
      
      * Fix bug in LDS sync
      
      * Fix compile error
      6c3c06bf
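      The reduceMean + reduceMeanSquare pair exists because E[x] and E[x^2] from one pass over the GEMM output are enough to reconstruct the variance (var = E[x^2] - E[x]^2) for the layernorm examples above. A host sketch of that post-processing (illustrative names):

```cpp
#include <cmath>
#include <vector>

void mean_meansquare_rows(const std::vector<float>& e, int M, int N,
                          std::vector<float>& mean, std::vector<float>& inv_std,
                          float epsilon = 1e-5f)
{
    for(int m = 0; m < M; ++m)
    {
        float sum = 0.f, sq_sum = 0.f;
        for(int n = 0; n < N; ++n) // one pass yields both reductions
        {
            const float v = e[m * N + n];
            sum += v;
            sq_sum += v * v;
        }
        mean[m] = sum / N;
        const float variance = sq_sum / N - mean[m] * mean[m]; // E[x^2]-E[x]^2
        inv_std[m] = 1.0f / std::sqrt(variance + epsilon);
    }
}
```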
    • Fused attention (#345) · cac014f1
      Anthony Chang authored
      
      
      * initial stub for gemm_gemm_xdl_cshuffle
      
      * set up example code
      
      * compiles
      
      * prevent integer overflow
      
      * harmonize interface between ref_gemm and ref_batched_gemm
      
      * batched_gemm_gemm
      
      * fix example
      
      * host tensor gen: diagonal pattern in lowest two dimensions only
      
      * make C descriptors contain only integral constants
      
      * clean up
      
      * add BlockwiseGemmXdlops_v2 while exploring a unified approach
      
      * implement proper interface
      
      * tidy up example
      
      * fix compilation warnings
      
      * coarsely controlled 2nd gemm padding
      
      * remove rocm-cmake's hard requirement for a certain revision
      
      * clang-format
      
      * resolve merge conflict
      
      * fix compilation error on gfx10
      
      * adds acc0 elementwise op to interface
      
      * attention host validation
      
      * add blockwise softmax v1
      
      * iteratively update softmax+gemm
      
      * transpose both gemm0 and gemm1 xdl output so as to avoid broadcasting softmax max/sum
      
      * add init method for easier debugging
      
      * do away with manual thread cluster calculation
      
      * generalize blockwise softmax interface
      
      * row-wise softmax sum & max
      
      * format
      
      * rename to DeviceBatchedGemmSoftmaxGemm
      
      * add gemm_softmax_gemm instances and tests
      
      * comment
      Co-authored-by: ltqin <letao.qin@amd.com>
      Co-authored-by: Chao Liu <chao.liu2@amd.com>
      cac014f1
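      The "iteratively update softmax+gemm" and "row-wise softmax sum & max" steps amount to an online softmax: as tiles of gemm0 = Q*K^T arrive, each row keeps a running max and sum, and previously accumulated gemm1 partials are rescaled whenever the max grows, so the full row of scores is never materialized. A scalar host sketch of the per-row state (illustrative, not the CK blockwise class):

```cpp
#include <algorithm>
#include <cmath>

struct SoftmaxRowState
{
    float running_max = -INFINITY;
    float running_sum = 0.f;

    // Absorb one tile of raw scores for this row; writes the unnormalized
    // softmax numerators into p and returns the factor by which previously
    // accumulated gemm1 partials must be rescaled.
    template <int TileN>
    float absorb(const float (&scores)[TileN], float (&p)[TileN])
    {
        float new_max = running_max;
        for(float v : scores)
            new_max = std::max(new_max, v);

        const float correction = std::exp(running_max - new_max);
        running_sum *= correction; // re-express old sum against the new max
        for(int i = 0; i < TileN; ++i)
        {
            p[i] = std::exp(scores[i] - new_max);
            running_sum += p[i];
        }
        running_max = new_max;
        return correction; // final outputs are divided by running_sum once
    }
};
```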
  5. 12 Aug, 2022 4 commits
  6. 11 Aug, 2022 2 commits
    • Add examples for GEMM + AddAddFastGelu (data type: int8, bf16, fp32) (#340) · 68b61504
      Po Yen Chen authored
      * Add always_false<> util to delay symbol resolution
      
      * Use always_false<> to prevent trying instantiate unwanted method
      
      * Add new specializations of AddAddFastGelu::operator() method
      
      * Add GEMM + AddAddFastGelu examples for data types: int8, bf16, fp32
      
      * Use floating point literal to simplify code
      
      * Remove unnecessary capture in lambda expressions
      
      * Extract fast GeLU calculation as standalone method
      
      * Mark methods as 'constexpr'
      
      * Add constraint for HostTensorDescriptor templated ctors
      
      * Simplify HostTensorDescriptor ctor calls
      
      * Add C++23 std::size_t literal suffix
      
      * Use _uz suffix to shorten example code
      
      * Remove unnecessary conversion to std::array<>
      
      * Re-order include directives
      
      * Remove C-style casting by literal suffix
      
      * Remove unnecessary statements in main()
      
      * Remove unused type parameter of always_false<>
      
      * Remove unused include directive
      
      * Exit main() by returning meaningful value
      
      * Use 'if constexpr' to switch example flow
      
      * Use std::is_same_v<> to shorten example code
      
      * Add 'inline' specifier to literal functions
      
      * Unify output methods in example
      
      * Move common codes into .inc file
      
      * Add type check in type_convert<>()
      
      * Add type_convert<float>() before computation
      
      * Merge AddAddFastGelu method specializations
      
      * Remove always_false<>
      
      * Add constraint to AddAddFastGelu::operator() parameter types
      68b61504
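      The always_false<> utility added (and later removed) above is the usual dependent-false idiom: it lets a static_assert fire only for the template instantiations that actually reach it, rather than unconditionally. A minimal sketch (illustrative function name, not the CK operator):

```cpp
#include <type_traits>

template <typename...>
inline constexpr bool always_false = false;

template <typename T>
void add_add_fast_gelu_like(T)
{
    if constexpr(std::is_floating_point_v<T>)
    {
        // supported data-type path ...
    }
    else
    {
        // A plain static_assert(false) would fire even if this branch is
        // never instantiated; a dependent expression fires only for the
        // offending T, delaying symbol resolution as the commit describes.
        static_assert(always_false<T>, "unsupported data type");
    }
}
```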
    • ckProfiler for layernorm (#330) · fdfd7eb5
      rocking5566 authored
      * Refine parameter
      
      * Add base class for layernorm
      
      * Add layernorm instance
      
      * Add layernorm to ckProfiler
      
      * Remove redundant
      
      * Add verification
      
      * Fix compile error due to merge
      fdfd7eb5
  7. 10 Aug, 2022 1 commit
    • Add batched/grouped_gemm contraction deviceOps (#349) · e08d68d2
      zjing14 authored
      
      
      * convnd_fwd fp16 example
      
      * update example
      
      * update example
      
      * update instance
      
      * updating reference conv
      
      * update reference conv
      
      * update conv fwd profiler
      
      * update conv 1d and 3d instance
      
      * update include path
      
      * clean
      
      * update profiler for conv bwd data and weight
      
      * update conv bwd weight
      
      * clean
      
      * update conv example
      
      * update profiler for conv bwd weight
      
      * update ckprofiler for conv bwd data
      
      * fix reference conv bwd data bug; update conv bwd data test
      
      * update examples
      
      * fix initialization issue
      
      * update test for conv fwd
      
      * clean
      
      * clean
      
      * remove test case too sensitive to error threshold
      
      * fix test
      
      * clean
      
      * fix build
      
      * adding conv multiple d
      
      * adding conv multiple D
      
      * add matrix padder
      
      * add gemm padding to convnd
      
      * adding group conv
      
      * update gemm multi-d
      
      * refactor
      
      * refactor
      
      * refactor
      
      * clean
      
      * clean
      
      * refactor
      
      * refactor
      
      * reorg
      
      * add ds
      
      * add bias
      
      * clean
      
      * add G
      
      * adding group
      
      * adding group
      
      * adding group
      
      * update Tensor
      
      * clean
      
      * update example
      
      * update DeviceGemmMultipleD_Xdl_CShuffle
      
      * update conv bwd-data and bwd-weight
      
      * update contraction example
      
      * update gemm and batch gemm with e permute
      
      * fix example build
      
      * instance for grouped conv1d
      
      * update example
      
      * adding group conv instance
      
      * update gemm bilinear instance
      
      * update gemm+add+add+fastgelu instance
      
      * update profiler
      
      * update profiler
      
      * update test
      
      * update test and client example
      
      * clean
      
      * add grouped conv into profiler
      
      * update profiler
      
      * clean
      
      * add test grouped conv, update all conv test to gtest
      
      * update test
      
      * replace gemm_c_permute with contraction
      
      * add grouped_contraction
      
      * add contraction in group_gemm
      
      * add example of grouped_gemm with contraction
      
      * add example of grouped_contraction_bias_e_permute
      
      * clean
      
      * fixed ds
      
      * add m3n2 m2n3 examples into gemm_bias_e_permute
      Co-authored-by: Chao Liu <chao.liu2@amd.com>
      e08d68d2
  8. 08 Aug, 2022 1 commit
    • Fix QA, allow switching compiler versions, fix google test compilation error. (#348) · aba7fefc
      Illia Silin authored
      * allow selecting compiler version
      
      * fix typo
      
      * add Wno-deprecated flag for google tests
      
      * change git repo, fix qa log files names
      
      * change the git clone syntax
      
      * use Omkar's git credentials
      
      * try to use jenkins as git user
      
      * try using illsilin username for gerrit repo with ssh key
      
      * try new gerrit authorization
      
      * change ssh key syntax
      
      * try another way of passing ssh key to docker
      
      * add mount ssh in dockerfile
      
      * create .ssh folder
      
      * move ssh-keyscan to later
      
      * get rid of npm call
      
      * build first docker image on master
      
      * check the contents of the .ssh folder
      
      * try replacing omkars creds with gerrit creds
      
      * use open repo, clean up changes
      
      * get rid of ssh default argument
      aba7fefc
  9. 07 Aug, 2022 1 commit
  10. 03 Aug, 2022 1 commit
  11. 02 Aug, 2022 2 commits
    • CGEMM examples bf16, fp32, int8 (#332) · fb0dc358
      Adam Osewski authored
      
      
      * Add int8 specialization for elementwise Add and Subtract.
      
      * CGEMM examples bf16, fp32, int8
      
      * Add convert reference output to CDataType.
      
      * Skip BF16 data type during testing.
      
      * Lower K value to get rid of accumulation error.
      
      * Fix merge artifact.
      
      * Fix changed function name: GetElementSpaceSize()
      
      * Fix merge artifact.
      Co-authored-by: Adam Osewski <aosewski@amd.com>
      fb0dc358
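      These CGEMM examples build on a 4-GEMM decomposition: with operands split into real/imaginary planes, C = A * B needs four real GEMMs combined by elementwise Add/Subtract, which is why the int8 specializations of those element ops were needed. A host sketch of the math (illustrative helper):

```cpp
#include <vector>

// c += sign * a * b on row-major M x K / K x N real planes
static void real_gemm(const std::vector<float>& a, const std::vector<float>& b,
                      std::vector<float>& c, int M, int N, int K, float sign)
{
    for(int m = 0; m < M; ++m)
        for(int k = 0; k < K; ++k)
            for(int n = 0; n < N; ++n)
                c[m * N + n] += sign * a[m * K + k] * b[k * N + n];
}

void cgemm_4gemm(const std::vector<float>& a_re, const std::vector<float>& a_im,
                 const std::vector<float>& b_re, const std::vector<float>& b_im,
                 std::vector<float>& c_re, std::vector<float>& c_im, // pre-zeroed
                 int M, int N, int K)
{
    real_gemm(a_re, b_re, c_re, M, N, K, +1.f); // Re(C) += A_re * B_re
    real_gemm(a_im, b_im, c_re, M, N, K, -1.f); // Re(C) -= A_im * B_im (Subtract)
    real_gemm(a_re, b_im, c_im, M, N, K, +1.f); // Im(C) += A_re * B_im (Add)
    real_gemm(a_im, b_re, c_im, M, N, K, +1.f); // Im(C) += A_im * B_re
}
```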
    • Run CI on MI100 nodes only, run daily QA on MI200 nodes. (#339) · 984b3722
      Illia Silin authored
      
      
      * turn on full qa only on gfx90a, use int initialization
      
      * change script syntax
      
      * update script parsing clinfo, throw exception if 0 devices
      
      * fix syntax
      
      * try using toBoolean for the QA conditions
      
      * run regular CI on MI100 only, use MI200 only for daily QA
      
      * evaluate when conditions before agent
      
      * launch QA on develop branch and update profile_reduce script
      
      * update test script
      
      * update script
      
      * remove false dependency from dockerfile
      
      * try removing rbuild completely
      Co-authored-by: Chao Liu <chao.liu2@amd.com>
      Co-authored-by: Chao Liu <lc.roy86@gmail.com>
      984b3722
  12. 29 Jul, 2022 1 commit
    • Clean up conv example, Instances, profiler and test (#324) · 500fa995
      Chao Liu authored
      * convnd_fwd fp16 example
      
      * update example
      
      * update example
      
      * update instance
      
      * updating reference conv
      
      * update reference conv
      
      * update conv fwd profiler
      
      * update conv 1d and 3d instance
      
      * update include path
      
      * clean
      
      * update profiler for conv bwd data and weight
      
      * update conv bwd weight
      
      * clean
      
      * update conv example
      
      * update profiler for conv bwd weight
      
      * update ckprofiler for conv bwd data
      
      * fix reference conv bwd data bug; update conv bwd data test
      
      * update examples
      
      * fix initialization issue
      
      * update test for conv fwd
      
      * clean
      
      * clean
      
      * remove test case too sensitive to error threshold
      
      * fix test
      
      * clean
      
      * fix build
      
      * adding conv multiple d
      
      * adding conv multiple D
      
      * add matrix padder
      
      * add gemm padding to convnd
      
      * adding group conv
      
      * update gemm multi-d
      
      * refactor
      
      * refactor
      
      * refactor
      
      * clean
      
      * clean
      
      * refactor
      
      * refactor
      
      * reorg
      
      * add ds
      
      * add bias
      
      * clean
      
      * add G
      
      * adding group
      
      * adding group
      
      * adding group
      
      * update Tensor
      
      * clean
      
      * update example
      
      * update DeviceGemmMultipleD_Xdl_CShuffle
      
      * update conv bwd-data and bwd-weight
      
      * update contraction example
      
      * update gemm and batch gemm with e permute
      
      * fix example build
      
      * instance for grouped conv1d
      
      * update example
      
      * adding group conv instance
      
      * update gemm bilinear instance
      
      * update gemm+add+add+fastgelu instance
      
      * update profiler
      
      * update profiler
      
      * update test
      
      * update test and client example
      
      * clean
      
      * add grouped conv into profiler
      
      * update profiler
      
      * clean
      
      * add test grouped conv, update all conv test to gtest
      
      * update test
      500fa995
  13. 22 Jul, 2022 2 commits
  14. 21 Jul, 2022 2 commits
    • Add full QA with verification option, few other changes. (#331) · d8415a96
      Illia Silin authored
      * add verify flag and update scripts
      
      * replace old check_error function with the new check_err
      
      * fix syntax
      
      * remove blank spaces
      
      * remove empty line
      
      * add check_err for tensors
      
      * fix syntax
      
      * replace tensors with vectors in check_err calls
      
      * fix syntax
      
      * remove blank spaces
      
      * fix syntax
      
      * add new line at end of file
      
      * disable conv2d_bwd_weight test, add gpu check
      
      * set check_gpu using export
      
      * check GPU using runShell
      
      * add definition of runShell
      
      * fix script syntax
      
      * reduce the number of threads, add full qa option
      
      * run processing scripts in bash
      
      * fix the branch and host names in performance scripts, add chronos
      
      * replace parameterizedCron with cron
      
      * archive the perf log files
      
      * try to fix git call
      
      * pass branch and host names as arguments into scripts
      
      * fix script arguments
      
      * fix script arguments
      
      * process results on master
      
      * fix pipeline
      
      * add definition of gpu_arch
      
      * run processing scripts in docker
      
      * fix the brackets
      
      * add agent master for the processing stage
      
      * get rid of show_node_info call on master
      
      * try using mici label instead of master, disable MI100 tests for now
      
      * fix syntax
      
      * simplify container for results processing
      
      * remove node(master) from the process_results stage
      
      * put all stages in original order
      
      * change the agent label from master to mici for gfx908
      d8415a96
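      The check_err replacing the old check_error above is a tolerance-based comparison over host containers. A sketch of the kind of check involved (illustrative signature and defaults, not CK's exact ones):

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

bool check_err(const std::vector<float>& out, const std::vector<float>& ref,
               float rtol = 1e-5f, float atol = 1e-5f)
{
    if(out.size() != ref.size())
        return false; // shapes must agree before comparing elements
    for(std::size_t i = 0; i < out.size(); ++i)
    {
        const float err = std::abs(out[i] - ref[i]);
        if(err > atol + rtol * std::abs(ref[i])) // mixed abs/rel tolerance
        {
            std::fprintf(stderr, "mismatch at %zu: %f vs %f\n",
                         i, out[i], ref[i]);
            return false;
        }
    }
    return true;
}
```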
    • Grouped Gemm device with multiD grid (#319) · 7959dad5
      zjing14 authored
      
      
      * replace gridwise_v2r3 with multiD
      
      * adjust parameters
      
      * add instances
      
      * fixed test_grouped_gemm
      
      * fix standalone softmax race condition around blockwise reduction
      
      * fixed ci
      
      * fixed comment: remove redundant workspace
      
      * use instanceFactory
      
      * add test layout
      
      * add empty Ds
      
      * add bias example
      
      * use array
      
      * separate examples
      Co-authored-by: Anthony Chang <ac.chang@outlook.com>
      7959dad5
  15. 15 Jul, 2022 1 commit
  16. 13 Jul, 2022 3 commits
    • Standalone layernorm (#315) · 7f216620
      rocking5566 authored
      
      
      * Implement layernorm kernel and deviceOp
      
      * verify gpu kernel with host code
      
      * 1. Separate gamma and beta from affine
      2. Check if argument is valid
      
      * clean
      
      * Sync the naming
      
      * Support sweep once mode if we can put k dimension data inside one block
      
      * [What] Get the length from the upper length.
      [Why] If we get the length directly, we may get the padded length.
      
      * We only use one block in the K dimension, so we can simplify the indexing of global reads/writes.
      
      * Use 1d descriptor for gamma and beta
      
      * Add accElementwiseOp
      
      * Extract layernorm host code
      
      * Support different YVectorDim in GridwiseLayernorm
      
      * Rename XSrcVectorDim to XYSrcVectorDim, because we use the same parameter in the deviceOp
      
      * Gamma and beta can share the VGPR.
      
      * Add test for fp32 and fp16
      
      * Fix concurrency bug and add a test case that previously could fail
      
      * Propagate NaN for layernorm
      Co-authored-by: Chao Liu <chao.liu2@amd.com>
      7f216620
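      What the standalone kernel computes per row, and what the "verify gpu kernel with host code" step checks, is y = (x - mean) / sqrt(var + eps) * gamma + beta; NaN inputs naturally propagate through mean/var, matching the NaN-propagation commit. A host sketch (illustrative names):

```cpp
#include <cmath>
#include <vector>

void layernorm_rows(const std::vector<float>& x, const std::vector<float>& gamma,
                    const std::vector<float>& beta, std::vector<float>& y,
                    int M, int N, float epsilon = 1e-5f)
{
    for(int m = 0; m < M; ++m)
    {
        float mean = 0.f, var = 0.f;
        for(int n = 0; n < N; ++n) mean += x[m * N + n];
        mean /= N;
        for(int n = 0; n < N; ++n)
        {
            const float d = x[m * N + n] - mean;
            var += d * d;
        }
        var /= N;
        const float inv_std = 1.0f / std::sqrt(var + epsilon);
        for(int n = 0; n < N; ++n) // affine applies per-column gamma/beta
            y[m * N + n] = (x[m * N + n] - mean) * inv_std * gamma[n] + beta[n];
    }
}
```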
    • minor fix in gemm client example (#328) · c5620ed0
      Daming Feng authored
      c5620ed0
    • Add switch between compilers, make 9110 compiler default, add full QA scripts. (#322) · 39acaea3
      Illia Silin authored
      * adding scripts for full perf test suite
      
      * uncomment the sql queries
      
      * fix typo and chmod a+x for scripts
      
      * dos2unix for all new scripts
      
      * disable verification in full performance test
      
      * fix reduction scripts, add grouped_gemm hotfix
      
      * fix the grouped_gemm hotfix and only run reduction for fp16
      
      * change compiler flag syntax
      
      * fix syntax
      
      * add predefinition of dockerArgs
      
      * avoid redefinitions of dockerArgs
      
      * add blank space at the end of dockerArgs
      
      * try to build with release compiler
      
      * adding spaces inside if condition
      
      * limit the number of threads for building 9110 compiler
      
      * change the way HIP_CLANG_PATH is set
      
      * remove the export command
      
      * change the conditional ENV syntax
      
      * set HIP_CLANG_PATH at docker run time
      
      * update scripts for full qa
      
      * enable the sql write query
      
      * fix typo
      
      * remove a comment from a script
      39acaea3
  17. 08 Jul, 2022 2 commits
    • GEMM pipeline v2 (#317) · 63914743
      Po Yen Chen authored
      
      
      * format
      
      * improving pipeline
      
      * fix typo
      
      * format
      
      * adding thread group
      
      * adding thread group
      
      * adding thread group
      
      * adding gemm pipeline
      
      * tweak
      
      * refactor
      
      * refactor
      
      * add missing type convert
      
      * refactor
      
      * refactor
      
      * refactor
      
      * clean
      
      * fix build
      
      * refactor
      
      * format
      
      * clean up
      
      * use remove_cvref_t
      
      * clean
      
      * use pipeline_v2 for gemm kernel
      
      * Remove inconsistent indent
      
      * Fix compilation errors due to incomplete merge process
      
      * Add missing include directives
      
      * Fix compilation errors in currently unused files
      
      * Add license in newly added files
      
      * Re-format touched files by clang-format-10
      
      * Fix wrong template argument count of DeviceGemm<>
      
      * Use language construct to choose between types
      
      * Use language construct to choose GEMM example instance
      
      * Fix compilation error due to interface change
      
      * Re-use type alias to avoid duplication
      
      * Unify type alias usage in source file
      
      * Only use v2 pipeline in one gridwise GEMM type
      
      * Remove no-longer used include directives
      
      * Add static_assert() to check pipeline type requirements
      
      * Revert "Add static_assert() to check pipeline type requirements"
      
      This reverts commit f0985f0a132671a1caaea92810c9f30dcf062bde.
      
      * clean
      
      * clean
      
      * clean
      
      * clean
      Co-authored-by: Chao Liu <chao.liu2@amd.com>
      Co-authored-by: shaojiewang <wsjmessi@163.com>
      63914743
    • add conv1d/3d bwd weight instances (#318) · 763ca615
      Shaojie WANG authored
      * add conv1d/3d bwd weight instances
      
      * add profiler code
      763ca615
  18. 07 Jul, 2022 1 commit
    • N-D Tensor Contraction example, instance, and client example (#270) · 4fe9c393
      Chao Liu authored
      * adding contraction
      
      * add contraction example
      
      * update example
      
      * update example
      
      * format
      
      * update readme
      
      * clean header
      
      * clean header
      
      * contraction with multiple D
      
      * rename
      
      * fix naming issue; add instances for contraction+bilinear
      
      * change assumed virtual layout of contraction; add client example
      
      * update example
      
      * update
      
      * contraction+scale
      
      * use type_convert
      
      * rename
      4fe9c393
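      The contraction treats grouped index sets as folded GEMM dimensions: with the example's assumed virtual layout, E[m0, m1, n0, n1] = sum over (k0, k1) of A[m0, m1, k0, k1] * B[n0, n1, k0, k1]. A host sketch with contiguous folded dimensions (illustrative):

```cpp
#include <vector>

void ref_contraction(const std::vector<float>& a, const std::vector<float>& b,
                     std::vector<float>& e, int M0, int M1, int N0, int N1,
                     int K0, int K1)
{
    const int M = M0 * M1, N = N0 * N1, K = K0 * K1;
    for(int m = 0; m < M; ++m)         // folded (m0, m1)
        for(int n = 0; n < N; ++n)     // folded (n0, n1)
        {
            float acc = 0.f;
            for(int k = 0; k < K; ++k) // folded contraction dims (k0, k1)
                acc += a[m * K + k] * b[n * K + k];
            e[m * N + n] = acc; // E laid out as [m0, m1, n0, n1]
        }
}
```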
  19. 06 Jul, 2022 1 commit
  20. 02 Jul, 2022 1 commit
    • Gemm+Bilinear (#316) · 9e4429f9
      Chao Liu authored
      * refactor
      
      * update example
      
      * update example
      
      * gemm bilinear
      
      * clean
      
      * update
      9e4429f9
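      The Bilinear element op that gives this fusion its name combines the GEMM result with an extra tensor elementwise: E = alpha * (A * B) + beta * D. A sketch of such a functor (illustrative member names):

```cpp
struct Bilinear
{
    float alpha_ = 1.0f;
    float beta_  = 1.0f;

    // e: fused output, c: GEMM accumulator, d: extra input read alongside C
    void operator()(float& e, const float& c, const float& d) const
    {
        e = alpha_ * c + beta_ * d;
    }
};
```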
  21. 01 Jul, 2022 2 commits
    • modified grouped gemm addressing method (#307) · 8e374781
      guangzlu authored
      
      
      * modified grouped gemm addressing method
      
      * modified addressing method in device_grouped_gemm_xdl.hpp
      Co-authored-by: root <root@dc-smc-13.amd.com>
      Co-authored-by: Chao Liu <chao.liu2@amd.com>
      8e374781
    • Single-kernel GEMM + layernorm (#263) · 63fd5da6
      Anthony Chang authored
      
      
      * dump lds content in appropriate precision type
      
      * add squared add reduction op; allows sq sum
      
      * initial stub from regular gemm impl
      
      * layernorm example code & host verification
      
      * initial layernorm implementation
      
      * tidy up
      
      * make C0 precision type consistent with C
      
      * clang-tidy and additional comments
      
      * tighten up example code
      
      * account for extra flops/bytes from normalization
      
      * clang-format
      
      * c0 bias/beta/gamma now have their own precision types
      
      * AccElemOp for gemm outputs prior to feeding to layernorm
      
      * update workgroup mapping
      
      * rename kernel template param to reflect its dual use
      
      * use LDS mem pool for reduction workspace
      
      * change cshuffle precision type to f16; clean up
      
      * clang-format
      
      * correct naming
      
      * explicit cast
      
      * fully implemented gemm + bias + activation + add + norm
      
      * activation in correct order
      
      * reflect reduction API's recent change
      
      * amend
      
      * clean up; add comment
      
      * keep up with recent changes in reduction API
      
      * format
      
      * resolve merge conflicts
      Co-authored-by: Chao Liu <chao.liu2@amd.com>
      63fd5da6
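      The "squared add" reduction op added above is what lets a single reduction pass produce the sum of squares alongside the plain sum, from which the fused layernorm's mean and variance follow. A minimal sketch of such an accumulation functor (illustrative):

```cpp
struct SquaredAdd
{
    // accumulate a += b * b; with a plain Add reduction running in parallel,
    // one pass yields both sum and sq_sum for the normalization step
    void operator()(float& a, const float& b) const { a += b * b; }
};
```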