- 20 Dec, 2023 1 commit
Artur Wojcik authored
* enable compilation of INSTANCES_ONLY for Windows
* suppress ROCMChecks warnings on GoogleTests
* suppress -Wfloat-equal warning on GoogleTests

Co-authored-by: Illia Silin <98187287+illsilin@users.noreply.github.com>
- 19 Dec, 2023 1 commit
arai713 authored
* adding files for F32 example
* adding functioning implementation with scalar multiplication and unary operator support
* added fp16 type check in unary square
* updating scalar multiplication as an operator
* functioning version with scalar operator
* changing strides for col major
* updated column major implementation
* working column major implementation
* cleaned up comments, rearranged/renamed files
* small edits to 3d transpose profiler
* adding test/profiler/instance files for hipTensor permute unit test
* added more test instances
* cleaned up errors, randomized input tensor, added more instances
* turned off time printouts
* removed conflicting transpose profiler
* rearranged some files
- 18 Dec, 2023 2 commits
rocking authored
* rename folder
* Add type string
* Remove typo
* Add deviceOp to backward x
* Add comment to describe the behavior of backward normalization
* Add kernel function, prepare to implement
* implement generic kernel
* Check vector size
* Add sweep once pipeline for small reduce size
* Fix bug of KRaw_ error
* Fix bug of dx stride
* sanity check for mean and rstd
* backward x for groupnorm
* Add bwd x instance
* add layernorm 2d bwd gamma beta instances
* Change save mean var type from f32 to f16 in f16 mode
* Change the example to f16
* Add groupnorm bwd gamma beta instance
* Add groupnorm bwd x instance
* Fix naming
* Add layernorm bwd x ckprofiler
* Add groupnorm bwd x profiler
* clang format
* Rename bwd x to bwd data
* Fix bug of verification in profiler
* Add test of layernorm and groupnorm bwd data
* Add missing cmake
* Add layernorm2d bwd data
* rename fwd example
* Add groupnorm client example
* Fix typo: replace Invarient with Invariant
* Add checking before running the best instance
Bartlomiej Wroblewski authored
This PR optimizes the fp16 instances of the direct-load GEMM kernel introduced in #999 and #1052. We measured the performance of the new instances on a CDNA2 GPU and compared it against the best non-direct-load GEMM instances across 76 different GEMM problems. On average, this change improves the performance of the tested problems by 47%; for cases known to be latency-bound, the speedup is around 126%.
- 12 Dec, 2023 1 commit
Illia Silin authored
* disabling some fp8 gemm instances to reduce build time
* disable fp8 gemm instances to reduce build time
* remove the unused variable
* build fp8 gemm default and padded instances separately
* fix include paths
- 11 Dec, 2023 1 commit
Bartlomiej Wroblewski authored
The current implementation of the IsSupported method in the contraction ops does not cover many of the cases in which the configured ScalarPerVector cannot actually be used to read A, B, or D, or to write E. This PR extends both the regular and multiABD contraction ops with improved checks, and adds new instances with smaller ScalarPerVector values to cover problems that the existing instances do not support.
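Conceptually, such a check has to verify that a vectorized access is legal at all for the given tensor: the dimension being vectorized must be contiguous in memory and its length divisible by the vector width. A minimal sketch of that kind of check, with illustrative names rather than CK's actual IsSupported code:

```cpp
// Hypothetical sketch, not the actual CK implementation. Vectorized
// reads/writes along a tensor dimension are only valid when that dimension
// is unit-stride and its length is divisible by the vector width.
#include <cstdint>
#include <vector>

bool CanUseScalarPerVector(const std::vector<std::int64_t>& lengths,
                           const std::vector<std::int64_t>& strides,
                           std::int64_t scalar_per_vector,
                           std::size_t vector_dim) // dimension being vectorized
{
    // Scalar (width 1) access is always valid.
    if(scalar_per_vector == 1)
        return true;

    // The vectorized dimension must be contiguous in memory.
    if(strides[vector_dim] != 1)
        return false;

    // The dimension length must be a multiple of the vector width,
    // otherwise the tail elements would need scalar handling.
    return lengths[vector_dim] % scalar_per_vector == 0;
}
```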
- 08 Dec, 2023 3 commits
Illia Silin authored
Nicolas Macchioni authored
Bartłomiej Kocot authored
* Support broadcast for bias in grouped conv fwd
* Fix comment
* Comment fixes
* Remove GK layout
- 05 Dec, 2023 1 commit
Jun Liu authored
- 03 Dec, 2023 1 commit
Bartlomiej Wroblewski authored
This PR introduces support for double buffering in LDS for the GEMM kernels that use direct load instructions. Direct loads now use inline asm instead of intrinsics: with the intrinsics, the compiler inserts additional waitcnt instructions, which breaks the potential load/compute overlap of double buffering. Using inline asm in turn requires a sched_barrier to make sure the compiler cannot incorrectly reschedule instructions, since it does not know the data dependencies between the global->LDS and LDS->registers transfers.
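A conceptual sketch of the resulting main loop, assuming hypothetical helpers direct_load() (the inline-asm global->LDS transfer) and block_gemm() (the LDS->registers compute step); the builtins are the LLVM amdgcn ones, but this is not the actual CK kernel:

```cpp
#include <hip/hip_runtime.h>

// Ping-pong over two LDS buffers: while tile k is consumed from one buffer,
// tile k+1 is already streaming into the other.
template <typename DirectLoad, typename BlockGemm>
__device__ void gemm_main_loop(DirectLoad direct_load, BlockGemm block_gemm,
                               int num_k_tiles)
{
    constexpr int kNumBuffers = 2;

    direct_load(0, 0); // prefetch tile 0 into LDS buffer 0

    for(int k = 0; k < num_k_tiles; ++k)
    {
        const int cur = k % kNumBuffers;

        // Wait for the transfer that filled `cur`. The load of the next tile
        // has not been issued yet, so waiting for all outstanding memory
        // counters here only covers the current tile.
        __builtin_amdgcn_s_waitcnt(0);
        __syncthreads();

        if(k + 1 < num_k_tiles)
            direct_load((k + 1) % kNumBuffers, k + 1); // overlaps with compute

        // The inline-asm direct load hides the LDS data dependency from the
        // compiler, so forbid it from moving instructions across this point.
        __builtin_amdgcn_sched_barrier(0);

        block_gemm(cur); // consume tile k while tile k+1 streams into LDS
    }
}
```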
- 28 Nov, 2023 1 commit
Illia Silin authored
* split the static library into several
* update lib paths and fix client example
* do not use device_mha_operations for client examples
* use appropriate libs to link to client examples
* remove the gpu/transpose path from the list
* try fixing client examples 3, 4, 9
* add necessary libs for client examples
* fix the layernorm client example
* fix the client examples 23 and 24
* fix typo
* add interface library and refresh clang format
- 25 Nov, 2023 1 commit
Bartlomiej Wroblewski authored
* Add basic support for direct loads from global to LDS
* Clean the code and comments
* Add support for fp16
* Add comments
* Add check for thread cluster lengths
* Align non-direct-load fp16 example
* Small fixes
* Extend IsSupported to check for supported GPU gens
* Build examples only on the supported HW
* Do not throw when instance not supported in 04 example
* Review: Apply review suggestions
* Review: small fix
* Review: small fix
- 17 Nov, 2023 1 commit
zjing14 authored
* improve 4k gemm perf
* add f8 instances
* format

Co-authored-by: Jing Zhang <jizha@amd.com>
- 14 Nov, 2023 1 commit
Bartłomiej Kocot authored
* Introduce multiABD api and deprecate multiD
* Replace multiD with multiABD
* Mark structures as deprecated
* Change doxygen deprecated to note to avoid warnings
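The deprecation pattern described above might look roughly like the following sketch (type names are illustrative, not the actual CK declarations): the old structure stays usable but warns at compile time, while the doxygen comment uses \note instead of \deprecated to keep the docs build warning-free.

```cpp
// New interface: multiple A, B and D tensors.
template <typename... Ds>
struct DeviceContractionMultipleABD
{
    // ...
};

/// \note Deprecated: use DeviceContractionMultipleABD instead.
template <typename... Ds>
struct [[deprecated("use DeviceContractionMultipleABD")]] DeviceContractionMultipleD
    : DeviceContractionMultipleABD<Ds...>
{
};
```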
- 11 Nov, 2023 1 commit
zjing14 authored
* add more instances for bfp16
* reduce the gemm input values to prevent round-off errors

Co-authored-by: Jing Zhang <jizha@amd.com>
Co-authored-by: illsilin <Illia.Silin@amd.com>
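The round-off fix relies on a simple observation: with low-precision types such as bf16, drawing inputs from a small integer range keeps each product exactly representable and limits accumulation error, so verification can use tight thresholds. A minimal sketch (function name and range are illustrative):

```cpp
#include <cstdlib>

// Small integer values in {-2, -1, 0, 1, 2}: products are exact in bf16,
// so the reference check is not polluted by input-dependent round-off.
float gen_small_input()
{
    return static_cast<float>(std::rand() % 5 - 2);
}
```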
- 10 Nov, 2023 2 commits
Bartłomiej Kocot authored
* Support multi AB for grouped conv fwd xdl
* Add instances
* Add client example
* Add example
* Add interface test
* Minor fixes
* Minor fixes
* Minor fixes
* Comment fixes
* Fixes
* Reference fix
* Test xdl fixes
* Improve multi_ab interface test
rocking authored
* Add layernorm backward reference code
* Add groupnorm backward reference code
* Add example
* clang format
* Fix bug of reference layernorm and groupnorm
* Fix naming
* Refine naming
* Add device op for normalization bwd gamma and beta
* Refine template parameter
* Add bwd gamma & beta kernel
* 1. Add groupnorm example 2. Refine layernorm naming
* Narrow down the static check for performance
* Refine variable name
- 09 Nov, 2023 2 commits
arai713 authored
* added working example for 5D input using 1D kernel
* example with 5D input tensor and 2d kernel - not working: issues with arguments
* added updated version of 3d device op - changed descriptors/dims
* added example file to check kernel
* fixed descriptor and isSupportedArgument stride problem
* added and modified kernel for 3d - updated tids/loop
* adding some more 5d example files
* fixed some issues
* changes made for testing
* working version: fixed error in stride for A, still a bit inefficient
* cleaned up formatting/comments
* updating formatting
* more formatting fixes
* fixing cmake, adding back gpu targets in cmake script
* adding client example
* added instances for client example
* fixed errors in client example
* implemented client example with device_elementwise.hpp and device_elementwise_3d_impl.hpp
* removed extra files
* minor formatting and naming fixes
* adding test files and profiler
* fixing minor error
* minor fix
* removed unnecessary comments, renamed files
* updated instance list for client example, added different layout example
* removing instances
* fixed error in instance generation
* remove comments
* update profiler and client example tensor layouts
* fixed errors in test/profiler
* updated vector dim access to enable vector load
* updated test/profiler files
* updated example with 1d kernel
* updating profiler
* renamed files

Co-authored-by: Jing Zhang <jizha@amd.com>
rocking authored
* Rename folder
* Add layernorm 4d fwd example
* Rename original layernorm example
* Add layernorm 4d f16 test
* Add layernorm4d_fwd client example
* Support layernorm4D in ckProfiler
* Rename groupnorm to groupnorm fwd in example
* Rename layernorm and group fwd in test
* Rename normalization to normalization_fwd (instances)
* Add fwd to DeviceNormalization
* Rename external api header
* Rename folder, because we can also add bwd in this folder
* Add fwd in layernorm and groupnorm (profiler)
* Fix compile error

Co-authored-by: Po Yen Chen <PoYen.Chen@amd.com>
- 07 Nov, 2023 2 commits
zjing14 authored
* improve kpad
* more tuning parameters
* f16_f8_fp16
* cut test time
* add f16_f8_fp16
* add f16_f8_f16
* testing instances for skinny cases
* format
* clean
* add fp16_f8_fp16
* clang-format
* add grouped gemm instances
* fixed profile grouped_gemm
* clean
* clean
* clean
* clean
* clean
* add missing instance func
* fixed interface

Co-authored-by: Jing Zhang <jizha@amd.com>
Co-authored-by: root <root@sh5-1e707-rc06-38.mkm.dcgpu>
Daming Feng authored
* add compute type check for fp16 in forward convolution instances
* Add compute type check for default compute types

Co-authored-by: Bartlomiej Kocot <barkocot@amd.com>
- 02 Nov, 2023 1 commit
Bartlomiej Wroblewski authored
* Add support for mixed precision in contraction scale and bilinear (#936)
* Extract common functionality to separate files
* Reference contraction: Remove incorrect consts from type_converts
* Reference contraction: Add missing type_convert for dst value
* Reference contraction: Fix incorrect order of B matrix dimensions
* Add support for mixed precision in contraction scale and bilinear
* Move using statements from instances to a common file
* Move using statements from examples to a common file
* Fix the order of B matrix dimensions across examples and profiler
* Fix the computation of error threshold
* Include possible DataType -> ComputeDataType casting error in the threshold
* Make ComputeDataType an optional argument
* Remove commented code
* Make the ComputeDataType an optional argument in instance

Co-authored-by: Illia Silin <98187287+illsilin@users.noreply.github.com>
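A rough sketch of a verification threshold that accounts for both effects mentioned above: accumulation over the reduction length, plus the extra rounding step when inputs are cast from DataType to ComputeDataType (helper name and formula are ours, not CK's exact computation):

```cpp
#include <algorithm>

// Relative error bound for a length-k contraction verified against a
// high-precision reference.
double relative_threshold(double data_type_eps,    // machine epsilon of DataType
                          double compute_type_eps, // machine epsilon of ComputeDataType
                          int k)                   // reduction length
{
    // Casting to a lower-precision compute type adds one rounding error per
    // input; the reduction then accumulates roughly k rounding errors.
    const double cast_error  = std::max(compute_type_eps - data_type_eps, 0.0);
    const double accum_error = static_cast<double>(k) * compute_type_eps;
    return data_type_eps + cast_error + accum_error;
}
```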
- 01 Nov, 2023 1 commit
Bartłomiej Kocot authored
* Add ScaleAddScaleAddRelu post op for conv fwd
* Fixes
* Fix instance file name
* Minor fix
- 31 Oct, 2023 2 commits
Po Yen Chen authored
* Disable the SLP vectorizer to prevent unnecessary waits
* Add a comment explaining the reason for the flag
* Fix wording
Bartłomiej Kocot authored
* Add support for groups in Img2Col/Col2Img
* Fix interface test
* Fix interface test G to N
* Improve performance
* Change gemm layout to 3d
* Fixes
- 23 Oct, 2023 1 commit
zjing14 authored
* add mnk padding for fp8
* add padding for row_col layout
* added padding for fp32

Co-authored-by: Jing Zhang <jizha@amd.com>
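MNK padding amounts to rounding each GEMM dimension up to a multiple of the corresponding block tile, so arbitrary problem sizes map onto the tiled kernel; the padded region is masked out on loads and stores. A sketch with an illustrative helper name:

```cpp
#include <cstdint>

// Round x up to the next multiple of tile (tile > 0).
constexpr std::int64_t pad_to_multiple(std::int64_t x, std::int64_t tile)
{
    return ((x + tile - 1) / tile) * tile;
}

// e.g. M = 4100 with MPerBlock = 256 -> padded M = 4352;
// the extra rows are masked in the kernel.
```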
- 21 Oct, 2023 1 commit
Bartłomiej Kocot authored
* Fix instances dtype check
* Fix source dtypes selector for examples and tests
* Sync with new cmakefile changes
* Remove unneeded ifdefs
* Remove unneeded ifdefs
- 19 Oct, 2023 3 commits
Qianfeng authored
* reinterpret_cast to const char* in dumpBufferToFile to be compatible with both const and non-const input pointers
* Add seed input to GeneratorTensor_4 for normal_distribution generator
* Add GetTypeString() for DeviceElementwiseImpl
* Add HIP_CHECK_ERROR macro
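Such a macro typically evaluates the HIP call once and aborts with file/line context on failure; a sketch of the usual shape (CK's actual definition may differ):

```cpp
#include <hip/hip_runtime.h>
#include <cstdio>
#include <cstdlib>

#define HIP_CHECK_ERROR(call)                                         \
    do                                                                \
    {                                                                 \
        const hipError_t err = (call);                                \
        if(err != hipSuccess)                                         \
        {                                                             \
            std::fprintf(stderr, "HIP error %s at %s:%d\n",           \
                         hipGetErrorString(err), __FILE__, __LINE__); \
            std::abort();                                             \
        }                                                             \
    } while(0)

// Usage: HIP_CHECK_ERROR(hipMalloc(&ptr, nbytes));
```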
Bartłomiej Kocot authored
* Extend available elementwise operations with conv examples
* Fixes
* Remove unneeded convert
* Update CMakeFile and dir name
Po Yen Chen authored
* Avoid force setting ENABLE_PIPELINE_V2_OPT to OFF
* Remove compilation option variable MAX_ILP_OPTS
- 18 Oct, 2023 2 commits
rocking authored
* save mean and inverse std in normalization
* Save mean and inverse std in splitK
* Vector save mean and inv std
* Modify instance for save mean and std
* simplify the layernorm example
* Save mean and std in groupnorm example
* Save mean and inv std in ckProfiler and test
* Remove compute data type from base class
* Save mean and inv std in client example
* Add changelog
* clang format
* Fix compile error
* Refine naming
* Avoid error in bf16
* revert changelog
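Saving the mean and inverse standard deviation in the forward pass matters because the backward pass needs both; persisting them avoids a second reduction over the input. A minimal single-row sketch (plain C++, illustrative rather than CK's device code):

```cpp
#include <cmath>
#include <vector>

void layernorm_fwd_row(const std::vector<float>& x, float epsilon,
                       std::vector<float>& y, float& saved_mean,
                       float& saved_inv_std)
{
    const int n = static_cast<int>(x.size());
    float sum = 0.f, sq_sum = 0.f;
    for(float v : x)
    {
        sum += v;
        sq_sum += v * v;
    }
    saved_mean = sum / n;
    // One-pass variance: E[x^2] - E[x]^2 (fine for a sketch).
    const float var = sq_sum / n - saved_mean * saved_mean;
    saved_inv_std   = 1.f / std::sqrt(var + epsilon);

    y.resize(n);
    for(int i = 0; i < n; ++i)
        y[i] = (x[i] - saved_mean) * saved_inv_std; // gamma/beta omitted
}
```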
zjing14 authored
* Add a condition to build fp8 instances
* simplified buffer_load/store
* add bfp8/fp8
* fixed
* remove all f8/bf8 condition include folder
* fixed cmake conditions
* fixed DTYPES=fp16/bfp16
* fix
* fixed buffer_load
* fixed buffer_store
* fix
* clean example cmake files
* fixed ci
* fixed ci

Co-authored-by: Rostyslav Geyyer <rosty.geyyer@amd.com>
Co-authored-by: Jing Zhang <jizha@amd.com>
- 17 Oct, 2023 1 commit
Bartłomiej Kocot authored
* Add grouped conv bwd weight wmma
* Update README, changelog, profiler
* Minor fixes
* Fix grouped conv bwd wei dl kernel
* Minor fixes
* Minor stylistic fixes
- 13 Oct, 2023 1 commit
Rostyslav Geyyer authored
* Add ComputeType
* Update for compatibility
* Add instances
* Update profiler api
- 11 Oct, 2023 2 commits
Adam Osewski authored
* Introduce LocalBlockToCTileMap.
* Change the signature of CalculateBottomIndex(), which no longer accepts any argument. The B2C map, which is already passed as an argument to the kernel Run function, now computes the block's local id outside, at the __global__ kernel entry point; the LocalB2C map stores the local block ID as a member.
* Use LocalBlockToCTile map in device ops.
* First draft of tile loop work distribution.
* Fix typo.
* Simplify kernel arguments. Calculate descriptors & B2C maps on the device.
* Use looping kernel.
* Fix B2C constructor.
* Fix Navi21 errors.
* Calculate tile start/end in device kernel.
* Change Run API to accept a user-provided workspace buffer.
* Add new line at EOF.
* Move Gemm KernelArguments to device op interface.
* Remove unused code.
* Update API.
* Launch grid size which is the min of occupancy vs tile count.
* Go back to using constant memory for gemm descriptors.
* Remove unused code.
* Add default virtual method implementation.
* Update comments to conform with doxygen style.
* Fix doc style and unused parameters.
* Add thread cluster lengths to kernel name.
* Remove old splitk impl and replace it with the tile looping one.
* Modify instances.
* Set KPerBlock to 64.
* Maximize vector load size wherever possible.
* Fix instances cluster lengths.
* Change comment style.
* Use 128b store where possible in instances.
* Update test cases, since KPerBlock has doubled.
* Update output stream operator for Sequence.
* Add pipeline version to GroupedGEMM device op type string.
* Fix pipeline version type logging.
* Fix input tensors type after merge.
* Fix compiler error.
* Fix output stream operator for Pipeline version.
* Store using 128b.
* Set of instances with kpb 32/64.
* Limit number of instances.
* Remove commented-out instances.
* Fix function name.
* Limit the number of instances. Add pipeline version to the regular instances.
* Change thread cluster layout for reading B tensor.
* Disable failing instances.

Co-authored-by: Adam Osewski <aosewski@amd.com>
Co-authored-by: zjing14 <zhangjing14@gmail.com>
Co-authored-by: Jing Zhang <jizha@amd.com>
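The tile-loop distribution summarized above amounts to a grid-stride loop over output C-tiles: the launch uses at most min(occupancy, tile count) workgroups, and each workgroup loops until all tiles are covered. A conceptual sketch with our own names, not CK's kernel:

```cpp
#include <hip/hip_runtime.h>

// Each workgroup starts at its own block id and strides by the grid size,
// so a fixed-size grid covers any number of tiles.
__global__ void grouped_gemm_tile_loop(int total_tiles /*, gemm descriptors... */)
{
    for(int tile = blockIdx.x; tile < total_tiles; tile += gridDim.x)
    {
        // Map `tile` to (group, m_tile, n_tile) via the B2C map,
        // then compute that output tile.
    }
}

// Host side: grid size is the min of what the GPU can keep resident
// and the actual number of tiles, e.g.
//   int grid = std::min(occupancy_limit, total_tiles);
//   hipLaunchKernelGGL(grouped_gemm_tile_loop, dim3(grid), dim3(256), 0, 0,
//                      total_tiles);
```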
- 05 Oct, 2023 2 commits
Lauren Wrubleski authored
Illia Silin authored
* Revert "Add support for mixed precision in contraction scale and bilinear (#936)". This reverts commit f0748506.
* revert commits #957 and #960
- 04 Oct, 2023 1 commit
zjing14 authored
* Add f8 bf8 gemm example
* Add element-wise ops
* Add intrinsics
* Update reference calculation
* Add an additional type option for xdlops gemm
* Fix build process
* Add bf8 to buffer addressing
* Update blockwise op, split typeA and typeB
* Update for compatibility
* Update naming: f8 -> fp8
* Update naming
* Format
* Update naming (#937)
* Add a client example
* Add computetypes to device and gridwise ops
* Add instances, update instance factory
* Format
* Fix a flag
* Add ckProfiler mode
* Fix typos
* Add an example
* Add bf8 generator
* add bf8 mfma; fixed type_convert for bf8
* move verification ahead of timing
* Update reference calculation
* Fix reference
* Narrow down float init range
* Fix bf8 bf8 mfma
* Add bf8 @ fp8 mfma
* Update example
* Update instances
* Update profiler api
* Update for compatibility
* Format
* Remove extra example
* Clean up
* workaround convert
* added instance of f16_bf8f8, and client example
* fixed mfma selector
* format

Co-authored-by: Rostyslav Geyyer <rosty.geyyer@amd.com>
Co-authored-by: Rostyslav Geyyer <46627076+geyyer@users.noreply.github.com>
Co-authored-by: Jing Zhang <jizha@amd.com>