- 13 Feb, 2024 1 commit

  Bartłomiej Kocot authored

- 12 Feb, 2024 1 commit

  zjing14 authored
    * add delayed cvt
    * extend fp16 gemm_splitk instances for fp8_fp16 gemm
    * add f8 example
    * add 128 kperblk instances for fp8
    * add kpb128 instance
    * added more instances into kpb128
    * clean code
    * fixes
    * Update example/35_splitK_gemm/splitK_gemm_xdl_fp16_fp8.cpp
    * Update include/ck/tensor_operation/gpu/thread/threadwise_tensor_slice_transfer.hpp
    * Update library/src/tensor_operation_instance/gpu/gemm_splitk/device_gemm_xdl_splitk_f16_fp8_f16_mk_nk_mn_kpb128_instance.cpp
    Co-authored-by: Jing Zhang <jizha@amd.com>
    Co-authored-by: Bartłomiej Kocot <barkocot@amd.com>
- 09 Feb, 2024 1 commit

  zjing14 authored
    * add generic instances
    * clean code
- 07 Feb, 2024 3 commits

  Illia Silin authored
    * split up splitk-gemm instances
    * clean up some unused variables
    * split the mk_kn_mn interwave splitk-gemm instances
    * split up f16_f16_f16 mk_nk_mn splitk gemm instances
    * fix clang format
    * fix function names
    * fix typo
    * split up the 2 largest fp16*fp8 splitk gemm instances
    * get rid of unused variables
    * split up the largest splitk-gemm fp8*fp16 instance file
    * split up the instances for xdl fp8 gemms
    * split the headers for f16 and i8 for wmma convolution instances

  jakpiase authored

  Bartlomiej Wroblewski authored
    * WIP: Implement direct loads split-K GEMM kernel
    * Clean the review
    Co-authored-by: Adam Osewski <19374865+aosewski@users.noreply.github.com>
    Co-authored-by: Bartłomiej Kocot <barkocot@amd.com>
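The split-K GEMM scheme behind these instances partitions the K dimension into slices, computes a partial product per slice, and reduces the partials into C. A minimal CPU sketch of that idea, with illustrative names (not CK's actual device kernel, where each slice maps to a grid dimension and the reduction uses atomics or a second kernel):

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <vector>

// Split-K GEMM sketch: C = A (M x K, row-major) * B (K x N, row-major).
// Each of the k_batch slices computes a partial product over its K-range;
// the partials are then reduced into C.
std::vector<float> splitk_gemm(const std::vector<float>& A,
                               const std::vector<float>& B,
                               std::size_t M, std::size_t N, std::size_t K,
                               std::size_t k_batch)
{
    std::vector<float> C(M * N, 0.0f);
    const std::size_t k_per_batch = (K + k_batch - 1) / k_batch;
    for(std::size_t b = 0; b < k_batch; ++b)
    {
        const std::size_t k_begin = b * k_per_batch;
        const std::size_t k_end   = std::min(K, k_begin + k_per_batch);
        for(std::size_t m = 0; m < M; ++m)
            for(std::size_t n = 0; n < N; ++n)
            {
                float partial = 0.0f;
                for(std::size_t k = k_begin; k < k_end; ++k)
                    partial += A[m * K + k] * B[k * N + n];
                C[m * N + n] += partial; // cross-slice reduction
            }
    }
    return C;
}
```

Splitting K increases parallelism for skinny problems (small M*N, large K) at the cost of the extra reduction traffic.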
- 25 Jan, 2024 1 commit

  rocking authored
    * Add layernorm bwd gamma beta external api
    * Add groupnorm external api
    * Add layernorm bwd gamma beta profiler
    * Add groupnorm bwd gamma beta ckProfiler
    * Add layernorm & groupnorm bwd gamma beta test
    * Fix groupnorm bwd gamma beta profiler bug
    * Layernorm bwd weight client example
    * Groupnorm bwd weight client example
    * clang format
    * Remove useless header
    * Let inv_std be positive
    * Rename to num_bytes and move this calculation outside the loop
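The gamma/beta gradients these kernels produce are column reductions of dy against the normalized input xhat = (x - mean) * inv_std: dgamma[n] = Σ_m dy[m,n]·xhat[m,n] and dbeta[n] = Σ_m dy[m,n]. A minimal CPU sketch with illustrative names (not the CK device op or its external API):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Backward gamma/beta for layernorm over an M x N input:
// each column n reduces over the M rows.
//   dgamma[n] = sum_m dy[m][n] * xhat[m][n]
//   dbeta[n]  = sum_m dy[m][n]
void layernorm_bwd_gamma_beta(const std::vector<float>& dy,
                              const std::vector<float>& xhat,
                              std::size_t M, std::size_t N,
                              std::vector<float>& dgamma,
                              std::vector<float>& dbeta)
{
    dgamma.assign(N, 0.0f);
    dbeta.assign(N, 0.0f);
    for(std::size_t m = 0; m < M; ++m)
        for(std::size_t n = 0; n < N; ++n)
        {
            dgamma[n] += dy[m * N + n] * xhat[m * N + n];
            dbeta[n]  += dy[m * N + n];
        }
}
```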
- 19 Jan, 2024 1 commit

  Haocong WANG authored
    * Optimize GEMM on MI200/300:
      1. Add new blockwise gemm pipeline
      2. Add irregular splitk instances
    * clang format + typo fix
    * Fix a bug
- 02 Jan, 2024 1 commit

  Bartłomiej Kocot authored
    This reverts commit ff24b537.
- 20 Dec, 2023 1 commit

  Artur Wojcik authored
    * enable compilation of INSTANCES_ONLY for Windows
    * suppress ROCMChecks warnings on GoogleTests
    * suppress -Wfloat-equal warning on GoogleTests
    Co-authored-by: Illia Silin <98187287+illsilin@users.noreply.github.com>
- 19 Dec, 2023 1 commit

  arai713 authored
    * adding files for F32 example
    * adding functioning implementation with scalar multiplication and unary operator support
    * added fp16 type check in unary square
    * updating scalar multiplication as an operator
    * functioning version with scalar operator
    * changing strides for col major
    * updated column major implementation
    * working column major implementation
    * cleaned up comments, rearranged/renamed files
    * small edits to 3d transpose profiler
    * adding test/profiler/instance files for hipTensor permute unit test
    * added more test instances
    * cleaned up errors, randomized input tensor, added more instances
    * turned off time printouts
    * removed conflicting transpose profiler
    * rearranged some files
- 18 Dec, 2023 2 commits

  rocking authored
    * rename folder
    * Add type string
    * Remove typo
    * Add deviceOp to backward x
    * Add comment to describe the behavior of backward normalization
    * Add kernel function, prepare to implement
    * implement generic kernel
    * Check vector size
    * Add sweep once pipeline for small reduce size
    * Fix bug of KRaw_ error
    * Fix bug of dx stride
    * sanity check for mean and rstd
    * backward x for groupnorm
    * Add bwd x instance
    * add layernorm 2d bwd gamma beta instances
    * Change save mean var type from f32 to f16 in f16 mode
    * Change the example to f16
    * Add groupnorm bwd gamma beta instance
    * Add groupnorm bwd x instance
    * Fix naming
    * Add layernorm bwd x ckprofiler
    * Add groupnorm bwd x profiler
    * clang format
    * Rename bwd x to bwd data
    * Fix bug of verification in profiler
    * Add test of layernorm and groupnorm bwd data
    * Add missing cmake
    * Add layernorm2d bwd data
    * rename fwd example
    * Add groupnorm client example
    * Fix typo: replace Invarient with Invariant
    * Add checking before running the best instance
  Bartlomiej Wroblewski authored
    This PR optimizes the fp16 instances of the direct-load GEMM kernel introduced in #999 and #1052. We measured the performance of the new instances on a CDNA2 GPU against the best non-direct-load GEMM instances across 76 different GEMM problems. On average, this change improves the performance of the tested problems by 47%; for cases known to be latency-bound, the speedup is around 126%.
- 12 Dec, 2023 1 commit

  Illia Silin authored
    * disable some fp8 gemm instances to reduce build time
    * remove the unused variable
    * build fp8 gemm default and padded instances separately
    * fix include paths
- 08 Dec, 2023 3 commits

  Illia Silin authored

  Nicolas Macchioni authored

  Bartłomiej Kocot authored
    * Support broadcast for bias in grouped conv fwd
    * Fix comment
    * Comment fixes
    * Remove GK layout
- 05 Dec, 2023 1 commit

  Jun Liu authored
- 03 Dec, 2023 1 commit

  Bartlomiej Wroblewski authored
    This PR introduces support for double buffering in LDS into the GEMM kernels that use direct load instructions. Direct loads now use inline asm instead of intrinsics: with intrinsics, the compiler adds extra waitcnt instructions, which breaks the possible load/compute overlap in case of double buffering. Using inline asm in turn requires a sched_barrier to make sure the compiler cannot incorrectly reschedule instructions, since it does not know the data dependencies between the global->LDS and LDS->registers transfers.
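The double-buffering idea can be sketched on the CPU with two ping-pong buffers: stage tile i+1 while consuming tile i, so the load of the next tile can overlap the compute on the current one. A minimal sketch with illustrative names; in the real kernel the "stage" is an asynchronous global->LDS direct load, and waitcnt/sched_barrier placement decides whether the overlap actually happens:

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <numeric>
#include <vector>

// Ping-pong double buffering: while tile t is consumed from buf[cur],
// tile t+1 is staged into buf[1 - cur].
float sum_tiles_double_buffered(const std::vector<float>& data,
                                std::size_t tile_size)
{
    const std::size_t num_tiles = data.size() / tile_size;
    std::vector<float> buf[2] = {std::vector<float>(tile_size),
                                 std::vector<float>(tile_size)};
    auto stage = [&](std::size_t tile, std::vector<float>& dst) {
        std::copy(data.begin() + tile * tile_size,
                  data.begin() + (tile + 1) * tile_size, dst.begin());
    };
    float acc = 0.0f;
    std::size_t cur = 0;
    stage(0, buf[cur]); // prologue: fill the first buffer
    for(std::size_t t = 0; t < num_tiles; ++t)
    {
        if(t + 1 < num_tiles)
            stage(t + 1, buf[1 - cur]); // prefetch next tile
        // "compute" on the current tile (here just a reduction)
        acc = std::accumulate(buf[cur].begin(), buf[cur].end(), acc);
        cur = 1 - cur; // swap buffers
    }
    return acc;
}
```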
- 28 Nov, 2023 1 commit

  Illia Silin authored
    * split the static library into several
    * update lib paths and fix client example
    * do not use device_mha_operations for client examples
    * use appropriate libs to link to client examples
    * remove the gpu/transpose path from the list
    * try fixing client examples 3, 4, 9
    * add necessary libs for client examples
    * fix the layernorm client example
    * fix the client examples 23 and 24
    * fix typo
    * add interface library and refresh clang format
- 25 Nov, 2023 1 commit

  Bartlomiej Wroblewski authored
    * Add basic support for direct loads from global to LDS
    * Clean the code and comments
    * Add support for fp16
    * Add comments
    * Add check for thread cluster lengths
    * Align non-direct-load fp16 example
    * Small fixes
    * Extend IsSupported to check for supported GPU gens
    * Build examples only on the supported HW
    * Do not throw when instance not supported in 04 example
    * Review: Apply review suggestions
    * Review: small fixes
- 17 Nov, 2023 1 commit

  zjing14 authored
    * improve 4k gemm perf
    * add f8 instances
    * format
    Co-authored-by: Jing Zhang <jizha@amd.com>
- 14 Nov, 2023 1 commit

  Bartłomiej Kocot authored
    * Introduce multiABD api and deprecate multiD
    * Replace multiD with multiABD
    * Mark structures as deprecated
    * Change doxygen deprecated to note to avoid warnings
- 11 Nov, 2023 1 commit

  zjing14 authored
    * add more instances for bfp16
    * reduce the gemm input values to prevent round-off errors
    Co-authored-by: Jing Zhang <jizha@amd.com>
    Co-authored-by: illsilin <Illia.Silin@amd.com>
- 10 Nov, 2023 1 commit

  Bartłomiej Kocot authored
    * Support multi AB for grouped conv fwd xdl
    * Add instances
    * Add client example
    * Add example
    * Add interface test
    * Minor fixes
    * Comment fixes
    * Fixes
    * Reference fix
    * Test xdl fixes
    * Improve multi_ab interface test
- 09 Nov, 2023 2 commits

  arai713 authored
    * added working example for 5D input using 1D kernel
    * example with 5D input tensor and 2d kernel - not working: issues with arguments
    * added updated version of 3d device op - changed descriptors/dims
    * added example file to check kernel
    * fixed descriptor and isSupportedArgument stride problem
    * added and modified kernel for 3d - updated tids/loop
    * adding some more 5d example files
    * fixed some issues
    * changes made for testing
    * working version: fixed error in stride for A, still a bit inefficient
    * cleaned up formatting/comments
    * updating formatting
    * more formatting fixes
    * fixing cmake, adding back gpu targets in cmake script
    * adding client example
    * added instances for client example
    * fixed errors in client example
    * implemented client ex with device_elementwise.hpp and device_elementwise_3d_impl.hpp
    * removed extra files
    * minor formatting and naming fixes
    * adding test files and profiler
    * fixing minor error
    * removed unnecessary comments, renamed files
    * updated instance list for client example, added different layout example
    * removing instances
    * fixed error in instance generation
    * remove comments
    * update profiler and client example tensor layouts
    * fixed errors in test/profiler
    * updated vector dim access to enable vector load
    * updated test/profiler files
    * updated example with 1d kernel
    * updating profiler
    * renamed files
    Co-authored-by: Jing Zhang <jizha@amd.com>

  rocking authored
    * Rename folder
    * Add layernorm 4d fwd example
    * Rename original layernorm example
    * Add layernorm 4d f16 test
    * Add layernorm4d_fwd client example
    * Support layernorm4D in ckProfiler
    * Rename groupnorm to groupnorm fwd in example
    * Rename layernorm and group fwd in test
    * Rename normalization to normalization_fwd (instances)
    * Add fwd to DeviceNormalization
    * Rename external api header
    * Rename folder, because we can also add bwd in this folder
    * Add fwd in layernorm and groupnorm (profiler)
    * Fix compile error
    Co-authored-by: Po Yen Chen <PoYen.Chen@amd.com>
- 07 Nov, 2023 1 commit

  zjing14 authored
    * improve kpad
    * more tuning parameters
    * f16_f8_fp16
    * cut test time
    * add f16_f8_fp16
    * add f16_f8_f16
    * testing instances for skinny cases
    * format
    * clean
    * add fp16_f8_fp16
    * clang-format
    * add grouped gemm instances
    * fixed profile grouped_gemm
    * clean
    * add missing instance func
    * fixed interface
    Co-authored-by: Jing Zhang <jizha@amd.com>
    Co-authored-by: root <root@sh5-1e707-rc06-38.mkm.dcgpu>
- 02 Nov, 2023 1 commit

  Bartlomiej Wroblewski authored
    * Add support for mixed precision in contraction scale and bilinear (#936)
    * Extract common functionality to separate files
    * Reference contraction: Remove incorrect consts from type_converts
    * Reference contraction: Add missing type_convert for dst value
    * Reference contraction: Fix incorrect order of B matrix dimensions
    * Move using statements from instances to a common file
    * Move using statements from examples to a common file
    * Fix the order of B matrix dimensions across examples and profiler
    * Fix the computation of error threshold
    * Make ComputeDataType an optional argument
    * Include possible DataType -> ComputeDataType casting error in the threshold
    * Remove commented code
    * Make the ComputeDataType an optional argument in instance
    Co-authored-by: Illia Silin <98187287+illsilin@users.noreply.github.com>
- 01 Nov, 2023 1 commit

  Bartłomiej Kocot authored
    * Add ScaleAddScaleAddRelu post op for conv fwd
    * Fixes
    * Fix instance file name
    * Minor fix
- 31 Oct, 2023 2 commits

  Po Yen Chen authored
    * Disable the SLP vectorizer to prevent unnecessary wait
    * Add comment on the reason for adding the flag
    * Fix wording

  Bartłomiej Kocot authored
    * Add support for groups in Img2Col/Col2Img
    * Fix interface test
    * Fix interface test G to N
    * Improve performance
    * Change gemm layout to 3d
    * Fixes
- 23 Oct, 2023 2 commits

  zjing14 authored
    * add mnk padding for fp8
    * add padding for row_col layout
    * added padding for fp32
    Co-authored-by: Jing Zhang <jizha@amd.com>

  Alan Turner authored
- 21 Oct, 2023 1 commit

  Bartłomiej Kocot authored
    * Fix instances dtype check
    * Fix source dtype selector for examples and tests
    * Sync with new cmakefile changes
    * Remove not needed ifdefs
- 19 Oct, 2023 1 commit

  Po Yen Chen authored
    * Avoid force setting ENABLE_PIPELINE_V2_OPT to OFF
    * Remove compilation option variable MAX_ILP_OPTS
- 18 Oct, 2023 2 commits

  rocking authored
    * Save mean and inverse std in normalization
    * Save mean and inverse std in splitK
    * Vector save mean and inv std
    * Modify instance for save mean and std
    * Simplify the layernorm example
    * Save mean and std in groupnorm example
    * Save mean and inv std in ckProfiler and test
    * Remove compute data type from base class
    * Save mean and inv std in client example
    * Add changelog
    * clang format
    * Fix compile error
    * Refine naming
    * Avoid error in bf16
    * Revert changelog
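Saving the per-row mean and inverse standard deviation during the forward pass lets the backward kernels consume them instead of recomputing. A minimal CPU sketch of a layernorm forward that saves both statistics, with illustrative names (not the CK device op):

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Layernorm forward over each row of an M x N input that also saves the
// per-row mean and inverse std, which the bwd kernels then reuse.
void layernorm_fwd_save_stats(const std::vector<float>& x,
                              std::size_t M, std::size_t N, float eps,
                              std::vector<float>& y,
                              std::vector<float>& mean,
                              std::vector<float>& inv_std)
{
    y.resize(M * N);
    mean.resize(M);
    inv_std.resize(M);
    for(std::size_t m = 0; m < M; ++m)
    {
        float mu = 0.0f, var = 0.0f;
        for(std::size_t n = 0; n < N; ++n)
            mu += x[m * N + n];
        mu /= N;
        for(std::size_t n = 0; n < N; ++n)
        {
            const float d = x[m * N + n] - mu;
            var += d * d;
        }
        var /= N;
        mean[m]    = mu;
        inv_std[m] = 1.0f / std::sqrt(var + eps); // stays positive
        for(std::size_t n = 0; n < N; ++n)
            y[m * N + n] = (x[m * N + n] - mu) * inv_std[m];
    }
}
```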
  zjing14 authored
    * Add a condition to build fp8 instances
    * simplified buffer_load/store
    * add bfp8/fp8
    * remove all f8/bf8 condition include folder
    * fixed cmake conditions
    * fixed DTYPES=fp16/bfp16
    * fixed buffer_load
    * fixed buffer_store
    * clean example cmake files
    * fixed CI
    Co-authored-by: Rostyslav Geyyer <rosty.geyyer@amd.com>
    Co-authored-by: Jing Zhang <jizha@amd.com>
- 17 Oct, 2023 1 commit

  Bartłomiej Kocot authored
    * Add grouped conv bwd weight wmma
    * Update README, changelog, profiler
    * Minor fixes
    * Fix grouped conv bwd wei dl kernel
    * Minor stylistic fixes
- 13 Oct, 2023 1 commit

  Rostyslav Geyyer authored
    * Add ComputeType
    * Update for compatibility
    * Add instances
    * Update profiler api