1. 18 Oct, 2023 4 commits
    • Layernorm and groupnorm support to save mean and inverse std in forward (#929) · 3696fe1c
      rocking authored
      * save mean and inverse std in normalization
      
      * Save mean and inverse std in splitK
      
      * Vectorized save of mean and inv std

      * Modify instances to save mean and std
      
      * simplify the layernorm example
      
      * Save mean and std in groupnorm example
      
      * Save mean and inv std in ckProfiler and test
      
      * Remove compute data type from base class
      
      * Save mean and inv std in client example
      
      * Add changelog
      
      * clang format
      
      * Fix compile error
      
      * Refine naming
      
      * Avoid error in bf16
      
      * revert changelog
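To make the feature in commit 3696fe1c concrete, here is a minimal host-side sketch, assuming plain C++ and illustrative names (layernorm_forward_save_stats, save_mean, and save_inv_std are not CK's API), of a layernorm forward pass that also stores the per-row mean and inverse standard deviation so the backward pass can reuse them instead of recomputing:

```cpp
#include <cmath>
#include <vector>

// Layernorm forward over each row of x (shape M x N) that also writes out the
// per-row mean and inverse standard deviation for reuse in the backward pass.
void layernorm_forward_save_stats(const std::vector<float>& x,
                                  const std::vector<float>& gamma,
                                  const std::vector<float>& beta,
                                  std::vector<float>& y,
                                  std::vector<float>& save_mean,    // size M
                                  std::vector<float>& save_inv_std, // size M
                                  int M, int N, float eps = 1e-5f)
{
    for(int m = 0; m < M; ++m)
    {
        float mean = 0.f;
        for(int n = 0; n < N; ++n) mean += x[m * N + n];
        mean /= N;

        float var = 0.f;
        for(int n = 0; n < N; ++n)
        {
            const float d = x[m * N + n] - mean;
            var += d * d;
        }
        var /= N;

        const float inv_std = 1.f / std::sqrt(var + eps);
        save_mean[m]    = mean;    // stored for backward
        save_inv_std[m] = inv_std; // stored for backward

        for(int n = 0; n < N; ++n)
            y[m * N + n] = (x[m * N + n] - mean) * inv_std * gamma[n] + beta[n];
    }
}
```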
    • fixed math-ci error; suspend a warning (#996) · 58338bb2
      zjing14 authored
      
      Co-authored-by: Jing Zhang <jizha@amd.com>
    • Clean DTYPES conditions in CMake (#974) · bf435140
      zjing14 authored
      
      
      * Add a condition to build fp8 instances
      
      * simplified buffer_load/store
      
      * add bfp8/fp8
      
      * fixed
      
      * remove all f8/bf8 condition include folder
      
      * fixed cmake conditions
      
      * fixed DTYPES=fp16/bfp16
      
      * fix
      
      * fixed buffer_load
      
      * fixed buffer_store
      
      * fix
      
      * clean example cmake files
      
      * fixed ci
      
      * fixed CI
      
      ---------
      Co-authored-by: Rostyslav Geyyer <rosty.geyyer@amd.com>
      Co-authored-by: Jing Zhang <jizha@amd.com>
    • Add contraction_multi_abd (#972) · 1cc36ba5
      zjing14 authored
      
      
      * add gridwise_multi_abd
      
      * move element_op into RunRead
      
      * merge element_wise op with data read
      
      * add multiABD example
      
      * allow packed elementwise_op
      
      * changed example
      
      * clean
      
      * clean
      
      * add is_detected
      
      * fix
      
      * minor fix
      
      * add scaleAdd_vec4 example
      
      * init commit for contraction_multi_ABD
      
      * add examples
      
      * add examples of multiA and broadcast
      
      * update example
      
      * fixed comments
      
      * Update cmake-ck-dev.sh
      
      * Update cmake-ck-dev.sh
      
      * Add comments into the example
      
      * Update CMakeLists.txt
      
      ---------
      Co-authored-by: Jing Zhang <jizha@amd.com>
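As a rough illustration of what "multi ABD" means in #972, here is a naive reference sketch under assumed semantics (a_op, b_op, and cde_op are placeholder functors, not CK's device API): the contraction consumes several A, B, and D tensors, applies elementwise ops fused with the data reads, and writes E = cde_op(sum_k a_op(A0, A1, ...) * b_op(B0, ...), D0, D1, ...):

```cpp
#include <cstddef>
#include <vector>

// Naive reference contraction with multiple A, B, and D tensors (flattened to a 2-D GEMM
// shape for simplicity): E = cde_op(sum_k a_op(A0,A1,...) * b_op(B0,...), D0, D1, ...).
template <typename AOp, typename BOp, typename CDEOp>
void reference_contraction_multi_abd(std::size_t M, std::size_t N, std::size_t K,
                                     const std::vector<std::vector<float>>& As, // each M*K
                                     const std::vector<std::vector<float>>& Bs, // each K*N
                                     const std::vector<std::vector<float>>& Ds, // each M*N
                                     std::vector<float>& E,                     // M*N
                                     AOp a_op, BOp b_op, CDEOp cde_op)
{
    for(std::size_t m = 0; m < M; ++m)
        for(std::size_t n = 0; n < N; ++n)
        {
            float acc = 0.f;
            for(std::size_t k = 0; k < K; ++k)
            {
                // The elementwise ops are fused with the data read: one value per tensor.
                std::vector<float> a_vals, b_vals;
                for(const auto& A : As) a_vals.push_back(A[m * K + k]);
                for(const auto& B : Bs) b_vals.push_back(B[k * N + n]);
                acc += a_op(a_vals) * b_op(b_vals);
            }
            std::vector<float> d_vals;
            for(const auto& D : Ds) d_vals.push_back(D[m * N + n]);
            E[m * N + n] = cde_op(acc, d_vals);
        }
}
```

Passing, for instance, an a_op that scales and sums its inputs gives the flavor of the scaleAdd example referenced in the commit messages.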
  2. 17 Oct, 2023 2 commits
  3. 16 Oct, 2023 2 commits
    • workaround with float (#992) · 39430bfd
      zjing14 authored
      
      Co-authored-by: Jing Zhang <jizha@amd.com>
    • Add hipTensor build and test to CK CI. (#990) · 707ad002
      Illia Silin authored
      * add a hipTensor test to CI
      
      * use jenkins git plugin
      
      * change hipTensor folder location in CI
      
      * change the git method for hipTensor
      
      * run tests using ctest
      
      * check the hipTensor contents
      
      * only build hipTensor on MI100/200
      
      * pull hipTensor as zip archive
      
      * fix jenkins syntax
      
      * add path to the CK installation
      
      * combine build commands into one shell
      
      * change jenkins syntax for CK installer path
      
      * try different syntax
      
      * allow unzip overwrite
      
      * fix jenkins file syntax
      
      * remove any old versions of hipTensor before building
      
      * add option to select hipTensor branch for testing
  4. 13 Oct, 2023 2 commits
  5. 12 Oct, 2023 2 commits
  6. 11 Oct, 2023 2 commits
    • Revert "Grouped Gemm with looping over the tiles. (#788)" (#982) · c99323be
      zjing14 authored
      This reverts commit a4f72a31.
    • Grouped Gemm with looping over the tiles. (#788) · a4f72a31
      Adam Osewski authored
      
      
      * Introduce LocalBlockToCTileMap.
      
      * Change the signature of the CalculateBottomIndex() function so that it no longer
      accepts any argument. The B2C map, which is already passed as an argument to the
      kernel's Run function, computes the block's local ID outside the kernel, at the
      __global__ entry point, and the LocalB2C map stores that local block ID as a member.
      
      * Use LocalBlockToCTile map in device ops.
      
      * First draft of tile loop work distribution.
      
      * Fix typo.
      
      * Simplify kernel arguments.
      
      Calculate descriptors & B2C maps on the device.
      
      * Use looping kernel.
      
      * Fix B2C constructor.
      
      * Fix Navi21 errors.
      
      * Calculate tile start/end in device kernel.
      
      * Change Run API to accept user provided workspace buffer.
      
      * Add new line at EOF.
      
      * Move Gemm KernelArguments to device op interface.
      
      * Remove unused code.
      
      * Update API.
      
      * Launch a grid size that is the minimum of the occupancy limit and the tile count
      
      * Get back to use constant memory for gemm descriptors.
      
      * Remove unused code.
      
      * Add default virtual method implementation.
      
      * Update comments to conform with doxygen style.
      
      * Fix doc style and unused parameters.
      
      * Add thread cluster lengths to kernel name.
      
      * Remove old splitk impl and replace it with tile looping one.
      
      * Modify instances.
      
      * Set KPerBlock to 64
      * Maximize the vector load size wherever possible.
      
      * Fix instances cluster lengths.
      
      * Change comment style.
      
      * Use 128b store where possible in instances.
      
      * Update test cases, since KPerBlock has doubled.
      
      * Update output stream operator for Sequence.
      
      * Add pipeline version to GroupedGEMM device op type string.
      
      * Fix pipeline version type logging.
      
      * Fix input tensors type after merge.
      
      * Fix compiler error.
      
      * Fix output stream operator for Pipeline version.
      
      * Store using 128b.
      
      * Set of instances with kpb 32/64
      
      * Limit number of instances
      
      * Remove commented out instances.
      
      * Fix function name.
      
      * Limit the number of instances.
      
      Add pipeline version to the regular instances
      
      * Change thread cluster layout for reading B tensor.
      
      * disabled failed instances
      
      ---------
      Co-authored-by: Adam Osewski <aosewski@amd.com>
      Co-authored-by: zjing14 <zhangjing14@gmail.com>
      Co-authored-by: Jing Zhang <jizha@amd.com>
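The tile-looping distribution described in #788 can be pictured with a small host-side sketch (illustrative only; the variable names and the occupancy number are assumptions, and CK computes these quantities on the device): the launch uses at most min(occupancy, total tiles) workgroups, and each workgroup strides through the flattened tile space of all groups, mapping each flat tile index back to its GEMM group:

```cpp
#include <algorithm>
#include <cstdio>
#include <vector>

int main()
{
    // Per-group tile counts, e.g. ceil(M/MPerBlock) * ceil(N/NPerBlock) for each GEMM.
    std::vector<int> tiles_per_group{12, 7, 20};
    int total_tiles = 0;
    for(int t : tiles_per_group) total_tiles += t;

    // Grid size is the minimum of the occupancy-limited block count and the tile count.
    const int max_occupancy_blocks = 16; // illustrative: CUs * blocks-per-CU
    const int grid_size            = std::min(max_occupancy_blocks, total_tiles);

    // Each "block" walks tiles with a stride of grid_size (persistent-block style loop).
    for(int block_id = 0; block_id < grid_size; ++block_id)
        for(int tile = block_id; tile < total_tiles; tile += grid_size)
        {
            // Map the flat tile index back to (group, local tile) to build the local B2C map.
            int group = 0, offset = tile;
            while(offset >= tiles_per_group[group]) offset -= tiles_per_group[group++];
            std::printf("block %d -> group %d, tile %d\n", block_id, group, offset);
        }
    return 0;
}
```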
  7. 10 Oct, 2023 2 commits
  8. 05 Oct, 2023 3 commits
  9. 04 Oct, 2023 3 commits
    • Grouped conv bwd data with fp16 input and bf8fp8 comp (#962) · 04f93aad
      zjing14 authored
      
      
      * Add f8 bf8 gemm example
      
      * Add element-wise ops
      
      * Add intrinsics
      
      * Update reference calculation
      
      * Add an additional type option for xdlops gemm
      
      * Fix build process
      
      * Add bf8 to buffer addressing
      
      * Update blockwise op, split typeA and typeB
      
      * Update for compatibility
      
      * Update naming to f8->fp8
      
      * Update naming
      
      * Format
      
      * Update naming (#937)
      
      * Add a client example
      
      * Add computetypes to device and gridwise ops
      
      * Add instances, update instance factory
      
      * Format
      
      * Fix a flag
      
      * Add ckProfiler mode
      
      * Fix typos
      
      * Add an example
      
      * Add bf8 generator
      
      * add bf8 mfma; fixed type_convert for bf8
      
      * move verification ahead of timing
      
      * Update reference calculation
      
      * Fix reference
      
      * Narrow down float init range
      
      * Fix bf8 bf8 mfma
      
      * Add bf8 @ fp8 mfma
      
      * Update example
      
      * Update instances
      
      * Update profiler api
      
      * Update for compatibility
      
      * Format
      
      * Remove extra example
      
      * Clean up
      
      * workaround convert
      
      * added instance of f16_bf8f8, and client example
      
      * fixed mfma selector
      
      * format
      
      ---------
      Co-authored-by: Rostyslav Geyyer <rosty.geyyer@amd.com>
      Co-authored-by: Rostyslav Geyyer <46627076+geyyer@users.noreply.github.com>
      Co-authored-by: Jing Zhang <jizha@amd.com>
    • Add conv bwd weight fp16 comp bf8 fp8 op, instances and example (#945) · 42facfc6
      Rostyslav Geyyer authored
      
      
      * Add f8 bf8 gemm example
      
      * Add element-wise ops
      
      * Add intrinsics
      
      * Update reference calculation
      
      * Add an additional type option for xdlops gemm
      
      * Fix build process
      
      * Add bf8 to buffer addressing
      
      * Update blockwise op, split typeA and typeB
      
      * Update for compatibility
      
      * Update naming to f8->fp8
      
      * Update naming
      
      * Format
      
      * Update naming (#937)
      
      * Add a client example
      
      * Add computetypes to device and gridwise ops
      
      * Add instances, update instance factory
      
      * Format
      
      * Fix a flag
      
      * Add ckProfiler mode
      
      * Fix typos
      
      * Add an example
      
      * Add bf8 generator
      
      * add bf8 mfma; fixed type_convert for bf8
      
      * move verification ahead of timing
      
      * Update reference calculation
      
      * Fix reference
      
      * Narrow down float init range
      
      * Fix bf8 bf8 mfma
      
      * Add bf8 @ fp8 mfma
      
      * Update example
      
      * Update instances
      
      * Update profiler api
      
      * Update for compatibility
      
      * Format
      
      * Remove extra example
      
      * Clean up
      
      * workaround convert
      
      ---------
      Co-authored-by: Jing Zhang <jizha@amd.com>
    • 3d grouped conv fwd with input/output fp16 and comp fp8 (#931) · e921e1f0
      zjing14 authored
      
      
      * add f8 comp instance
      
      * fixed
      
      * fixed comments
      
      * rename
      
      * fixed dtype
      
      * format
      
      * fixed CI
      
      * fixed ci
      
      * add missing ComputeType
      
      * fixed CI
      
      * fixed
      
      * Update cmake-ck-dev.sh
      
      ---------
      Co-authored-by: Jing Zhang <jizha@amd.com>
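The three commits above (04f93aad, 42facfc6, e921e1f0) all revolve around running fp16-stored tensors through an fp8/bf8 compute type. A conceptual sketch, with to_compute_type as a deliberately crude stand-in for a real fp8 conversion (not CK's type_convert), shows the idea: operands are converted to the lower-precision compute type before the multiply, while accumulation stays in float:

```cpp
#include <vector>

// Illustrative stand-in for converting an fp16 value to the fp8 compute type:
// coarse rounding to mimic the reduced precision (not a real fp8 implementation).
static float to_compute_type(float x) { return static_cast<float>(static_cast<int>(x * 8.f)) / 8.f; }

// Reference GEMM with fp16 storage (modeled as float here) and an fp8 compute type:
// each operand is converted before the multiply, accumulation stays in float.
void gemm_fp16_data_fp8_compute(int M, int N, int K,
                                const std::vector<float>& A,
                                const std::vector<float>& B,
                                std::vector<float>& C)
{
    for(int m = 0; m < M; ++m)
        for(int n = 0; n < N; ++n)
        {
            float acc = 0.f;
            for(int k = 0; k < K; ++k)
                acc += to_compute_type(A[m * K + k]) * to_compute_type(B[k * N + n]);
            C[m * N + n] = acc;
        }
}
```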
  10. 03 Oct, 2023 3 commits
  11. 02 Oct, 2023 3 commits
    • Add fp8 @ bf8 gemm support and example (#933) · bd09b5c5
      Rostyslav Geyyer authored
      * Add f8 bf8 gemm example
      
      * Add element-wise ops
      
      * Add intrinsics
      
      * Update reference calculation
      
      * Add an additional type option for xdlops gemm
      
      * Fix build process
      
      * Add bf8 to buffer addressing
      
      * Update blockwise op, split typeA and typeB
      
      * Update for compatibility
      
      * Update naming to f8->fp8
      
      * Update naming
      
      * Format
    • Illia Silin · 59dbb01f
    • Contraction multi abd (#957) · 9d58c421
      zjing14 authored
      
      
      * add gridwise_multi_abd
      
      * move element_op into RunRead
      
      * merge element_wise op with data read
      
      * add multiABD example
      
      * allow packed elementwise_op
      
      * changed example
      
      * clean
      
      * clean
      
      * add is_detected
      
      * fix
      
      * minor fix
      
      * add scaleAdd_vec4 example
      
      * init commit for contraction_multi_ABD
      
      * add examples
      
      * add examples of multiA and broadcast
      
      * update example
      
      * fixed comments
      
      * Update cmake-ck-dev.sh
      
      * Update cmake-ck-dev.sh
      
      * Add comments into the example
      
      ---------
      Co-authored-by: Jing Zhang <jizha@amd.com>
  12. 29 Sep, 2023 2 commits
    • Illia Silin · 6b5f6473
    • Add support for mixed precision in contraction scale and bilinear (#936) · f0748506
      Bartlomiej Wroblewski authored
      * Extract common functionality to separate files
      
      * Reference contraction: Remove incorrect consts from type_converts
      
      * Reference contraction: Add missing type_convert for dst value
      
      * Reference contraction: Fix incorrect order of B matrix dimensions
      
      * Add support for mixed precision in contraction scale and bilinear
      
      * Move using statements from instances to a common file
      
      * Move using statements from examples to a common file
      
      * Fix the order of B matrix dimensions across examples and profiler
      
      * Fix the computation of error threshold
      
      * Make ComputeDataType an optional argument
      
      * Include possible DataType -> ComputeDataType casting error in the threshold
      
      * Remove commented code
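A hedged sketch of the error-threshold idea mentioned in #936 (the formula and epsilon values are illustrative assumptions, not CK's exact computation): the allowed deviation from the reference grows with the number of accumulated products and includes the extra error introduced when DataType values are cast to ComputeDataType:

```cpp
#include <cmath>
#include <cstdio>

// Rough verification threshold: per-operand casting error accumulated over K products,
// plus the rounding error of the accumulation itself. Epsilons are illustrative.
double error_threshold(int k_accumulations,
                       double data_type_epsilon,    // e.g. ~9.77e-4 for fp16
                       double compute_type_epsilon) // larger for a lower-precision compute type
{
    const double cast_error  = 2.0 * compute_type_epsilon;
    const double accum_error = std::sqrt(static_cast<double>(k_accumulations)) * data_type_epsilon;
    return static_cast<double>(k_accumulations) * cast_error + accum_error;
}

int main()
{
    std::printf("threshold for K=256: %g\n", error_threshold(256, 9.77e-4, 6.25e-2));
    return 0;
}
```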
  13. 28 Sep, 2023 2 commits
  14. 27 Sep, 2023 5 commits
  15. 26 Sep, 2023 3 commits