1. 30 Nov, 2022 2 commits
    • gemm, conv perchannel quantization (#503) · ad541ad6
      rocking5566 authored
      * Use gemm_multiple_D instead
      
      * Add gemm bias relu quantization example
      
      * Add pure gemm quantization example
      
      * Add quantization of perchannel conv + bias + relu example
      
      * Refine the code
      
      * Rename multiplier to requant_scale
      
      * Rename the folder
      
      * Remove redundant comment
      
      * Rename the file. Prepare to add perchannel
      
      * Add conv perchannel instance
      
      * Move to quantization folder
      
      * Add conv perchannel client example
      
      * Apply Rangify constructor of HostTensorDescriptor & Tensor<>
      
      * Fix merge error
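      As background for these quantization entries: per-channel requantization applies a per-output-channel scale to the int32 accumulator before converting back to int8. Below is a minimal host-side sketch of that epilogue (a hypothetical helper, not CK's device API; the name requant_scale follows the renaming above):

      ```cpp
      #include <algorithm>
      #include <cmath>
      #include <cstdint>

      // Hedged sketch of a perchannel conv/gemm + bias + relu requantization
      // epilogue: each output channel c gets its own requant_scale[c].
      std::int8_t requantize(std::int32_t acc, std::int32_t bias, float requant_scale)
      {
          float y = static_cast<float>(acc + bias) * requant_scale; // rescale int32 accumulator
          y       = std::max(y, 0.0f);                              // fused relu
          return static_cast<std::int8_t>(
              std::clamp(std::round(y), -128.0f, 127.0f));          // saturate to int8
      }
      ```

      A per-layer (perlayer) variant is the same computation with a single scalar requant_scale shared by all channels.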
    • BatchNorm backward instance/external API/profiler/tests (#519) · 63af525c
      Qianfeng authored
      * Refine the device batchnorm-backward base API templates and data type assignments
      
      * Remove duplicated kernel file
      
      * Add batchnorm backward instances and external API
      
      * Add batchnorm-backward profiler and tests
      
      * Add client example which uses batchnorm backward external API
      
      * Merge test/batchnorm_fwd and test/batchnorm_bwd into one directory
      
      * Loosen the threshold for batchnorm-backward check_err()
  2. 29 Nov, 2022 2 commits
    • fix GetTypeString · 0e9c88ce
      fsx950223 authored
    • BatchNorm backward implementation (#461) · 44789d99
      Qianfeng authored
      * Implemented batchnorm-backward Blockwise and Multiblock kernels
      
      * Add batchnorm-backward device op
      
      * Add batchnorm-backward host-reference op
      
      * Add batchnorm-backward example
      
      * Parameters renaming in batchnorm backward kernels and device op
      
      * Change the example to loosen the threshold for ScaleDiff checking
      
      * Add comments to explain the implementation of batchnorm-backward
      
      * Parameters renaming again in batchnorm backward kernels
      
      * Improve the expression calculation for performance
      
      * Add batchnorm backward to README
      
      * Add comments to explain inv-variance in batchnorm forward and backward
      
      * Rename the batchnorm forward training and inference examples
      
      * Add/update the comments for batchnorm-backward kernels
      
      * Renaming again
      
      * Add block_sync_lds between two consecutive blockwise reductions
      
      * Move common expression 1/N out of the static_for loops
      
      * Add dy_elementwise_op
      
      * Renaming in backward example again
      
      * Add checking for reduceDims in reference_batchnorm_backward
      
      * Update to comments and codes format
      
      * Rename in the comments
      
      * Remove common expression out of the loop in reference_batchnorm_backward_nhwc_c
      
      * Add block_sync_lds() between blockwise reduction again
      
      * Fix comments again
      
      * Remove int8 from batchnorm-forward instances since it is not needed for forward training and could fail test
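      For reference, the per-channel math these backward kernels implement (NHWC layout, reducing over N*H*W) can be sketched on the host as follows. This is a hedged illustration of the standard batchnorm backward formulas, not CK's reference op; inv_var = 1 / sqrt(variance + epsilon) is the saved inv-variance the comments above refer to, and hoisting 1/N matches the "common expression" commit:

      ```cpp
      #include <cstddef>
      #include <vector>

      // Hedged host-side sketch of batchnorm backward for one channel.
      void batchnorm_bwd_one_channel(const std::vector<float>& x,
                                     const std::vector<float>& dy,
                                     float mean, float inv_var, float gamma,
                                     std::vector<float>& dx,
                                     float& dgamma, float& dbeta)
      {
          const std::size_t n = x.size();
          dgamma = dbeta = 0.f;
          for(std::size_t i = 0; i < n; ++i)
          {
              const float x_hat = (x[i] - mean) * inv_var; // normalized input
              dgamma += dy[i] * x_hat;                     // dScale reduction
              dbeta += dy[i];                              // dBias reduction
          }
          const float inv_n = 1.f / static_cast<float>(n); // 1/N hoisted out of the loop
          for(std::size_t i = 0; i < n; ++i)
          {
              const float x_hat = (x[i] - mean) * inv_var;
              dx[i] = gamma * inv_var * (dy[i] - inv_n * (dbeta + x_hat * dgamma));
          }
      }
      ```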
  3. 25 Nov, 2022 1 commit
    • BatchNorm forward instance/external api/profiler/tests/client example (#511) · 4e6a5575
      Qianfeng authored
      
      
      * Update to device_batchnorm_forward base class to include all template parameters for problem description
      
      * Add batchnorm forward instances and external api
      
      * Add batchnorm forward profiler module which uses the external api
      
      * Add some comments in batchnorm_forward example to explain the dimensions in lengths[]
      
      * Replace the reference_batchnorm_forward_nhwc_c by generic reference_batchnorm_forward
      
      * Improvement to the batchnorm infer base API
      
      * Add batchnorm forward client example which shows using the batchnorm forward external API
      
      * Add test for batchnorm forward
      
      * Tune the batchnorm profiler initialization values and error threshold
      
      * Add support for bhalf_t in instances/external api/tests
      
      * Add support for int8_t in instances/external api/tests
      
      * Add support for double in instances/external api/tests
      
      * Let ScaleDataType and BiasDataType be same as XDataType and YDataType when creating instances
      
      * Checking before running best instance in batchnorm_fwd_nhwc client example
      
      * Add checking for YElementwiseOp in batchnorm_forward external API
      
      * Add more types in batchnorm forward profiler
      
      * Add more test lengths
      Co-authored-by: rocking5566 <ChunYu.Lai@amd.com>
  4. 20 Nov, 2022 1 commit
  5. 17 Nov, 2022 1 commit
  6. 15 Nov, 2022 4 commits
    • Add BF16 tests for batched_gemm_softmax_gemm_permute (#504) · 4c4c7328
      guangzlu authored
      
      
      * fixed bug in softmax reference & add bf16 examples for batched_gemm_scale_softmax_gemm
      
      * added bf16 tests for batched_gemm_softmax_gemm_permute
      
      * changed format of device_batched_gemm_softmax_gemm_permute_xdl_cshuffle_bf16_bf16_bf16_bf16_gmk_gnk_gno_gmo_instance.cpp
      
      * changed format device_batched_gemm_softmax_gemm_permute_xdl_cshuffle_bf16_bf16_bf16_bf16_gmk_gnk_gno_gmo_instance.cpp
      
      * aligned annotations
      
      * modified CMakeLists for examples
      
      * add common example code of fp16/bf16 version for batched_gemm_scale_softmax_gemm_xdl
      
      * use macro to control the instances
      
      * added macro control into instances
      
      * clang-format some files
      
      * changed error tolerance for bf16
      
      * changed index for 10_elementwise_normalization
      
      * fixed xdlops code bug in amd_xdlops.hpp
      Co-authored-by: Po Yen Chen <PoYen.Chen@amd.com>
    • Add Conv Backward Data on Navi21 for ResNet50 (#499) · db0eb1ea
      ltqin authored
      
      
      * start add example
      
      * add device dl
      
      * change launch kernel
      
      * change init data method
      
      * change example config
      
      * add config valid check
      
      * add instance for dl bwd
      
      * add instance to ckProfiler
      
      * reserver to profiler and cmakelist
      
      * add instance to ckProfiler2
      
      * change instance f32 config
      
      * fix example return value
      Co-authored-by: letaoqin <letaoqin@amd.com>
      Co-authored-by: Po Yen Chen <PoYen.Chen@amd.com>
    • 7038723a
      Po Yen Chen authored
    • Introduce ck::accumulate_n() (#439) · 730204ee
      Po Yen Chen authored
      We can use this template to eliminate duplicated iterator-computation
      logic. By providing the return type to ck::accumulate_n(), we can avoid
      type-conversion operations (see the sketch below).
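      A minimal sketch of the idea, assuming a signature modeled on std::accumulate (the actual ck::accumulate_n() may differ):

      ```cpp
      #include <iterator>
      #include <numeric>

      // Hedged sketch of the idea behind ck::accumulate_n(). The caller fixes
      // the accumulator type up front, and the "first + n" iterator computation
      // lives in exactly one place.
      template <typename ReturnType, typename InputIt, typename Size>
      ReturnType accumulate_n(InputIt first, Size n, ReturnType init)
      {
          return std::accumulate(first, std::next(first, n), init);
      }
      ```

      For example, accumulate_n<long>(v.begin(), 3, 0L) sums the first three elements of a std::vector<int> into a long with no narrowing conversion along the way.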
  7. 11 Nov, 2022 1 commit
  8. 10 Nov, 2022 4 commits
    • add client example for elementwise_normalization (#501) · 70456328
      guangzlu authored
      * add client example for elementwise_normalization
      
      * clang format elementwise_layernorm2d.cpp
      
      * changed some naming to make it more understandable
      
      * changed naming of input into ab_input
      
      * fixed bug for threadwise_x_store
      
      * add elementwise operation to reference
    • Add client example of grouped conv2d forward (data type: fp16) (#488) · f4980310
      Po Yen Chen authored
      * Rename example folder for GroupedConvFwdMultipleD
      
      * Unify example codes
      
      * Change target names
      
      * Add fp16 example for multiple d instance
      
      * Re-format common.hpp
      
      * Add interface 'DeviceGroupedConvFwd'
      
      * Use simpler interface
      
      * Move common conv params out
      
      * Rename conv fwd client example folder
      
      * Add missing include directive
      
      * Update grouped conv instance implementations
      
      * Simplify ckProfiler (grouped conv forward)
      
      * Use GroupedConvFwd to implement client example
      
      * Use greater group count in example
      
      * Add custom target to group examples
      
      * Add extra tag param to instance factory function
      
      * Use tag to differentiate factory functions
      
      * Add missing tag argument for factory function
      
      * Remove inheritance relationship
      
      * Remove no-longer used include directive
      
      * Add license in front of file
    • Add client example of grouped conv2d backward weight (data type: fp16) (#498) · 38470e04
      Po Yen Chen authored
      * Remove redundant CMake setting
      
      * Extract common code from files
      
      * Rename folder 'convnd' to 'conv'
      
      * Use std::array<> to accept compile-time known # of arguments
      
      * Fix compilation error of tuning parameter
      
      * In example, use same setting as unit-test
      
      * Remove no-longer used include directive
      
      * Add interface for grouped conv bwd weight
      
      * Add group support for conv bwd weight
      
      * Add grouped conv bwd weight example
      
      * Use group parameter in example
      
      * Rename example folder
      
      * Remove non-grouped version example source files
      
      * Rename device op template
      
      * Add group support to convolution backward weight
      
      * Remove debug messages
      
      * Use smaller group size in example
      
      * Use named variable as loop terminate condition
      
      * Prettify example output message
      
      * Enlarge used grid size
      
      * Allow real grid size to exceed expected grid size
      
      * Rename interface file
      
      * Add client example for group...
    • Remove interface 'DeviceGroupedConvBwdData' (#500) · 67423a22
      Po Yen Chen authored
      * Remove interface 'DeviceGroupedConvBwdData'
      
      * Remove no-longer needed include directive
      
      * Rename client example folder
  9. 03 Nov, 2022 1 commit
    • Fused elementwise normalization (#492) · 8a4253ba
      guangzlu authored
      * add fused addition layernorm
      
      * add fused addition layernorm
      
      * changed CMakelist
      
      * removed annotates
      
      * modified descriptor of C
      
      * fixed bug in gridwise add layernorm
      
      * format the files
      
      * modified name from add&layernorm into elementwise&layernorm
      
      * created fused elementwise layernorm branch
      
      * change input into tuple type
      
      * add sweep once to reduce load & read of C from global memory
      
      * modified Argument api
      
      * modified way to malloc c in global memory
      
      * changed gamma and beta to m_k_desc
      
      * fixed bug when sweeping once, and moved CDataType when defining device-level struct
      
      * add src dim for gamma and beta
      
      * implement optimization for coalesced
      
      * delete an annotation line
      
      * fixed some bugs to meet the requirements of ck
      
      * add bandwidth computing in example, and fixed the time unit
      
      * move device_elementwise_layernorm_impl.hpp into device/impl
      
      * fixed bug in device_elementwise_layernorm_impl.hpp
      
      * changed name from layernorm into normalization
      
      * clang-format the changed files
      
      * changed the names
      
      * moved intermediate results into LDS; it becomes faster in non-sweep-once cases
      
      * changed naming of C into X to make the definition clearer
      
      * changed naming in example
      
      * add tests for elementwise normalization
      
      * move example_elementwise_layernorm_blockwise into folder 44_elementwise_normalization
      
      * move test_elementwise_layernorm_fp16 into new folder
      
      * move elementwise_normalization_instances into a new folder
      
      * add more tests in test_elementwise_layernorm_fp16.cpp
      
      * added some corner cases in test
      
      * fixed method to compute LDS size for matrix X
      
      * changed name of 44_elementwise_normalization into 45_elementwise_normalization
      
      * modified some comments
      
      * modified some other confused comments
      
      * reduce redundant tests in test_elementwise_layernorm_fp16.cpp
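      What the fused op computes, per normalized row: X = elementwise(A, B) feeds straight into layernorm, so X never round-trips through global memory (the "sweep once" commits above are about this). A hedged single-row host sketch using add as the elementwise op (illustrative, not the device API):

      ```cpp
      #include <cmath>
      #include <cstddef>
      #include <vector>

      // Hedged sketch of fused elementwise + layernorm over one row of length N.
      void elementwise_layernorm_row(const std::vector<float>& a,
                                     const std::vector<float>& b,
                                     const std::vector<float>& gamma,
                                     const std::vector<float>& beta,
                                     std::vector<float>& y, float epsilon)
      {
          const std::size_t n = a.size();
          std::vector<float> x(n); // stand-in for the LDS staging of X
          float mean = 0.f, mean_sq = 0.f;
          for(std::size_t i = 0; i < n; ++i)
          {
              x[i] = a[i] + b[i]; // the fused elementwise op
              mean += x[i];
              mean_sq += x[i] * x[i];
          }
          mean /= n;
          mean_sq /= n;
          // variance via E[x^2] - E[x]^2; good enough for a sketch
          const float inv_std = 1.f / std::sqrt(mean_sq - mean * mean + epsilon);
          for(std::size_t i = 0; i < n; ++i)
              y[i] = (x[i] - mean) * inv_std * gamma[i] + beta[i];
      }
      ```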
  10. 02 Nov, 2022 6 commits
    • Refine layernorm naming and test code (#497) · d4d1147f
      rocking5566 authored
      * Sync the naming
      
      * Sync the test of layernorm with groupnorm
      
      * Sync the naming
      
      * Minor change for comment and log
      
      * [What] Add saveMean and SaveInvVariance in the interface.
      [Why] These can optimize the backward pass.
    • Anthony Chang
    • Add client example of grouped conv2d backward data (data type: fp16) (#481) · 9e57a290
      Po Yen Chen authored
      * Improve example reusability
      
      * Remove no-longer used file
      
      * Rename folder of grouped_conv_bwd_data example
      
      * Add normal grouped conv bwd example
      
      * Add interface 'DeviceGroupedConvBwdData'
      
      * Prettify comment of device op type arguments
      
      * Add grouped conv2d/conv3d backward data fp16 instances
      
      * Fix wrong template argument
      
      * Add grouped_conv2d_bwd_data client example
      
      * Use simpler expression to calculate memory size
      
      * Fix formatting
      
      * Remove grouped_conv3d_bw_data instances
      
      Underlying device operator is not ready to handle 3D input
      
      * Remove no-longer necessary include directive
      
      * Add missing include directive
      
      * Use more realistic conv param in example
    • Add pipeline v1/v2 selector, add more instances (#381) · 1a0b0e7b
      Rostyslav Geyyer authored
      
      
      * Add gridwise gemm pipeline v1/v2 selector
      
      * Pipeline selector working, test-wise add pipeline options to one instance
      
      * Add gemm instances
      
      * Add debug info to DeviceGemmXdl
      
      * Add debug info to DeviceGemmXdl_CShuffle
      
      * Add debug info to DeviceGemmXdl_CShuffle and instances to gemm_add_add_fastgelu
      
      * Minor fix
      
      * Add debug info to DeviceBatchedGemmXdl and instances to batched_gemm
      
      * set up inter-wave configuration
      
      * use default loop scheduling for supported gemm ops
      
      To blanket-apply interwave scheduling to all supported gemm ops, define the macro CK_EXPERIMENTAL_DEFAULT_TO_INTER_WAVE_SCHEDULING=1. This should be discouraged, though, as it is not covered by CI.
      
      * Add enum PipelineVersion
      
      * Update instances
      
      * Format
      
      * Fix the merge conflict
      
      * Add flags to disable added instances
      
      * Test disable flag check
      
      * Disable flag check
      
      * Enable the instances
      Co-authored-by: Anthony Chang <ac.chang@outlook.com>
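      A compile-time selector of this kind can be sketched as follows; PipelineVersion is named in the commits above, while the pipeline struct names here are hypothetical placeholders, not CK's actual gridwise gemm pipeline types:

      ```cpp
      // Hedged sketch of a compile-time v1/v2 pipeline selector.
      enum class PipelineVersion { v1, v2 };

      struct GemmPipelineV1 { /* baseline pipeline (placeholder) */ };
      struct GemmPipelineV2 { /* alternative pipeline (placeholder) */ };

      template <PipelineVersion Ver>
      constexpr auto gridwise_gemm_pipeline_selector()
      {
          if constexpr(Ver == PipelineVersion::v1)
              return GemmPipelineV1{};
          else
              return GemmPipelineV2{};
      }

      // usage: a device op picks its pipeline via a template parameter, e.g.
      // using Pipeline = decltype(gridwise_gemm_pipeline_selector<PipelineVersion::v2>());
      ```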
    • Softmax unit-test reduction across all and non innermost dims cases. (#406) · 6d8614ee
      Adam Osewski authored
      
      
      * Add reduction across all dims cases.
      
      * host softmax: handle all reduce
      
      * Test cases when reduced dim is not innermost axis.
      
      * Fix syntax.
      
      * Test non innermost dim for fp32 and int8
      
      * Group test suites wrt NumReduceDim.
      
      * Additionally test failing cases.
      
      * Throw error when Rank or NumReduceDims doesn't match arguments.
      
      * Check reducedDims has correct values
      
      * Don't reuse DeviceReduceMultiblock's IsSupportedArgument method;
      instead implement our own (in fact, just get rid of one check to enable
      reduction across inner dimensions).
      
      * Reorganize unit tests to better cover use scenarios.
      
      * Test input validation
      * Test reduction of inner dimensions with custom op instances.
      
      * Refactor fp32 and int8 unit tests.
      
      * Fix FP32 instance template parameters.
      
      * Add more instances.
      
      * Instances with InSrcVectorDim=0.
      
      * Do not initialize and copy data when arg not supported.
      
      * ckProfiler Softmax use instance factory.
      
      * Refactor device softmax IsSupported.
      
      * Additionally add non-polymorphic api functions
      
      * Split softmax instances into multiple files.
      
      * Fix profiler.
      
      * Reorganize tests to reuse profiler and cover edge cases.
      
      * Clang-format
      
      * I8 Softmax instances along with UT.
      
      * Reuse type alias definitions from instance factory header.
      
      * Clean included headers
      
      * Fix variable names.
      
      * Add missing checks in Argument constructor.
      Co-authored-by: Adam Osewski <aosewski@amd.com>
      Co-authored-by: Anthony Chang <ac.chang@outlook.com>
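      For context, the host softmax these tests validate against computes, per reduced slice, the numerically stable form exp(x - max) / sum(exp(x - max)). A generic per-slice sketch (assumes a non-empty slice; the generic host reference handles arbitrary reduce dims by iterating such slices — this is not the actual test code):

      ```cpp
      #include <algorithm>
      #include <cmath>
      #include <vector>

      // Hedged sketch of a numerically stable softmax over one reduced slice.
      void softmax_slice(std::vector<float>& slice)
      {
          const float m = *std::max_element(slice.begin(), slice.end());
          float sum = 0.f;
          for(float& v : slice)
          {
              v = std::exp(v - m); // subtract max for stability
              sum += v;
          }
          for(float& v : slice)
              v /= sum;
      }
      ```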
    • Conv perlayer int8 quantization (#471) · 226bc02b
      rocking5566 authored
      * Add conv2d requant example
      
      * Fix bash error
      
      * Rename example
      
      * 1. Rename gemm quantization
      2. Share the requantization lambda function with conv
      
      * Refine declare type
      
      * Add conv bias relu quantization example
      
      * clang format
      
      * Fix compile error due to merge develop
      
      * Fix CI error
      
      * Extract quantization post operation into another file
      
      * Support quantization for non-piecewise-linear functions
      
      * Add instance for conv quantization
      
      * Add convolution quantization factory
      
      * Add convolution quantization client example
      
      * Add more instances with different template parameters
      
      * clang format
      
      * Sync the naming with the develop
  11. 31 Oct, 2022 1 commit
    • Add Conv Forward on Navi21 for ResNet50 (#490) · 8ee36118
      ltqin authored
      
      
      * add device of dl
      
      * fix k1 of GridwiseGemmDl_km_kn_mn_v1r3
      
      * init version for dl conv
      
      * add example(init)
      
      * result right
      
      * disable elementwise operation
      
      * check parameters
      
      * add fp32,int8 example and change check code
      
      * change device file and class name
      
      * add check vector access of C
      
      * add instance
      
      * add to ckProfiler
      
      * add Filter1x1Pad0 instances
      
      * fix ignore error
      
      * fix for CI
      Co-authored-by: letaoqin <letaoqin@amd.com>
  12. 28 Oct, 2022 1 commit
    • Batchnorm-forward implemented using welford method to calculate variance (#403) · 7fa892e6
      Qianfeng authored
      
      
      * Update to the batchnorm-forward API and base class
      
      * Fix leaked header inclusion in gridwise_set_buffer_value.hpp
      
      * Add kernels and device file for batchnorm-forward welford supporting both blockwise and multi-block reduction
      
      * Update to the batchnorm-forward example to use the new batchnorm-forward device interface
      
      * Change the batchnorm-forward reference to use sequential welford method
      
      * Change to assign the workspace into four buffers in the host layer
      
      * Use GetReduceCountPerThread functor to replace the initial count for Blockwise and Multiblock welford
      
      * Tiny correction and remove un-used file under example/34_batchnorm
      
      * Renaming in the kernel arguments
      
      * Explicitly use ck::math::sqrt in batchnorm-forward kernels
      
      * Add some comments to some kernels
      
      * Tiny fix
      
      * Generalize the data types in reference_batchnorm_forward_nhwc_c
      
      * Use ck::ignore to mark un-used parameters
      
      * Move GetReduceCountPerThread functor codes from kernel to device
      
      * Remove some un-used codes in device_batchnorm_forward_impl.hpp
      
      * Tiny fix in batchnorm_forward example
      
      * Move GetReduceCountPerThread() to welford_helper.hpp
      
      * Use separate data types for Scale and Bias
      
      * Renaming in device Op
      
      * Tiny fix in forward example
      
      * Update to batchnorm-infer (type splitting, renaming)
      
      * Add time and bandwidth measurement to the batchnorm-forward example
      
      * Add support of elementwise operation for batchnorm forward output
      
      * Reduce object copying by passing object as reference type
      
      * Tiny change for performance
      
      * Updates for performance again
      
      * Some Renamings
      
      * Add GetActualVariance template parameter for ThreadwiseWelfordMerge
      
      * Tiny update in reference batchnorm forward nhwc/c
      
      * Move batchnorm multiblock kernel files to grid/batchnorm_multiblock sub-directory
      
      * Fuse mean and bias in the normalization calculation
      Co-authored-by: root <root@dc-smc-18.amd.com>
      Co-authored-by: rocking5566 <ChunYu.Lai@amd.com>
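      The Welford method named in this entry computes mean and variance in a single pass and, unlike the naive sum-of-squares formula, avoids catastrophic cancellation; it also merges cleanly across threads and blocks. A hedged sketch of the update and merge steps (mirroring the roles of the threadwise accumulation and ThreadwiseWelfordMerge, not their actual code):

      ```cpp
      // Hedged sketch of Welford's online mean/variance.
      struct Welford
      {
          int count  = 0;
          float mean = 0.f;
          float m2   = 0.f; // sum of squared deviations from the running mean

          void update(float x) // one element at a time (threadwise accumulation)
          {
              ++count;
              const float delta = x - mean;
              mean += delta / count;
              m2 += delta * (x - mean);
          }

          void merge(const Welford& other) // combine partials (blockwise merge)
          {
              const int n = count + other.count;
              if(n == 0) return;
              const float delta = other.mean - mean;
              mean += delta * other.count / n;
              m2 += other.m2 + delta * delta * count * other.count / n;
              count = n;
          }

          float variance() const { return count > 0 ? m2 / count : 0.f; }
      };
      ```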
  13. 27 Oct, 2022 1 commit
    • Input/output permutation for fused attention (#460) · de37550f
      Anthony Chang authored
      
      
      * reopen masking attention instances since CI is upgraded
      
      * re-enable instances previously failed on 9110
      
      * enable ksize-kpadding pair validity test
      
      * add non-masked attention+permute test; expose masking boolean to attention kernel handles
      
      * disable bench
      
      * fix test
      
      * move files
      
      * bulk rename batched_gemm_masking_scale_softmax_gemm_permute to batched_gemm_softmax_gemm_permute
      
      * format
      
      * amend rename
      
      * disable bench in test
      
      * add mask/no-mask test for non-permute attention kernels
      
      * disable broken kernel instance
      
      * example working
      
      add non-permuted problem statement
      
      evaluating whether overhead comes from permutation or the extra kernel arg
      
      * interface for bias addition without implementing it
      
      * test and profiler running
      
      * tidy
      
      * mask type determined by enum class
      
      * unify example code
      
      * move masking specialization to its own header
      
      * align formats
      
      * extract helper functions
      
      * experiment merging dims for attn w/ permute; shows perf parity with attn wo/ permute
      
      * add tensor specialization to template args
      
      since tensor spec packed shows perf parity when permutation isn't needed
      
      remove redundant template args
      
      comment on 'packed' tensor specialization
      
      * grouped attention with input/output permute example
      
      * format
      
      * clean up
      
      * refactor acc0 tile visitor
      Co-authored-by: shaojiewang <wsjmessi@163.com>
      Co-authored-by: Chao Liu <chao.liu2@amd.com>
  14. 25 Oct, 2022 3 commits
    • Update to the Reduction API and instances (#476) · dda3a0a1
      Qianfeng authored
      * Simplify the macros for declaring and defining the add_device_reduce_instance_xxxx() instances
      
      * Change the types of lengths and strides from std::vector to std::array for the reduction device interfaces
      
      * Remove DeviceSoftmaxImpl's depending on DeviceReduceMultiblock
      
      * Split the cpp and hpp files for reduction instances to enable more parallel compiling
      
      * Remove the using of macros for declaring reduction instances and instance references
      
      * Update to add_device_reduce_instance_xxxx templated functions
      
      * Use ReduceOperation+InElementwiseOp+AccElementwiseOp to replace the ReduceOpId in defining add_reduce_instance_xxxx() templates
      
      * Change return format
    • Revert "Fused elementwise layernorm (#468)" (#491) · 6ea9257e
      guangzlu authored
      This reverts commit efbcc6ed.
    • Fused elementwise layernorm (#468) · efbcc6ed
      guangzlu authored
      * add fused addition layernorm
      
      * add fused addition layernorm
      
      * changed CMakelist
      
      * removed annotates
      
      * modified descriptor of C
      
      * fixed bug in gridwise add layernorm
      
      * format the files
      
      * modified name from add&layernorm into elementwise&layernorm
      
      * created fused elementwise layernorm branch
      
      * change input into tuple type
      
      * add sweep once to reduce load & read of C from global memory
      
      * modified Argument api
      
      * modified way to malloc c in global memory
      
      * changed gamma and beta to m_k_desc
      
      * fixed bug when sweeping once, and moved CDataType when defining device-level struct
      
      * add src dim for gamma and beta
      
      * implement optimization for coalesced
      
      * delete an annotation line
      
      * fixed some bugs to meet the requirements of ck
      
      * add bandwidth computing in example, and fixed the time unit
      
      * move device_elementwise_layernorm_impl.hpp into device/impl
      
      * fixed bug in device_elementwise_layernorm_impl.hpp
      
      * changed name from layernorm into normalization
      
      * clang-format the changed files
      
      * changed the names
      
      * moved intermediate results into LDS; it becomes faster in non-sweep-once cases
      
      * changed naming of C into X to make the definition clearer
      
      * changed naming in example
      
      * add tests for elementwise normalization
      
      * move example_elementwise_layernorm_blockwise into folder 44_elementwise_normalization
      
      * move test_elementwise_layernorm_fp16 into new folder
      
      * move elementwise_normalization_instances into a new folder
      
      * add more tests in test_elementwise_layernorm_fp16.cpp
      
      * added some corner cases in test
      
      * fixed method to compute LDS size for matrix X
      
      * changed name of 44_elementwise_normalization into 45_elementwise_normalization
      
      * modified some comments
      
      * modified some other confused comments
      
      * reduce redundant tests in test_elementwise_layernorm_fp16.cpp
  15. 13 Oct, 2022 2 commits
    • Refactor device op implementations into `impl` subdirectory. (#420) · 30480288
      Adam Osewski authored
      
      
      * Move kernel implementation files under impl directory.
      
      * Update examples paths.
      
      * Update device kernel impl include paths.
      
      * Update tensor operation instances include paths.
      
      * Update profiler and tests include paths.
      
      * Clang-format
      
      * Update include paths for batched gemm reduce
      
      * Refactor UnitTest ConvNDBwdWeight.
      
      * Refactor fwd and bwd data convND UT.
      
      * Fix used test macro.
      
      * Fix include path.
      
      * Fix include paths.
      
      * Fix include paths in profiler and tests.
      
      * Fix include paths.
      Co-authored-by: Adam Osewski <aosewski@amd.com>
    • Fix bug of layernorm ckProfiler and refine code (#448) · 1b62bfaa
      rocking5566 authored
      * Fix bug of profiler for layernorm
      
      * 1. Rename layernorm into normalization
      2. Decouple softmax from normalization
      
      * clang-format
  16. 11 Oct, 2022 1 commit
    • Example contraction splitk (#430) · d8b41e1c
      ltqin authored
      * start split k
      
      * add base device class
      
      * add example after merge develop
      
      * add gridwise gemm
      
      * add b matrix split k
      
      * split=1
      
      * change name for kb
      
      * no-bias result right
      
      * bias only add once
      
      * fix register spill
      
      * regular code
      
      * add fp32 example
      
      * fix for 64bit index
      
      * fix CheckValidity of gridwise
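      Split-K here means partitioning the contraction (K) dimension across workgroups, each computing a partial product that is summed at the end (on the GPU typically via atomics or a follow-up reduction). An illustrative host-side sketch of the decomposition, serial for clarity:

      ```cpp
      #include <algorithm>
      #include <vector>

      // Hedged sketch of the split-K decomposition: k_batch partial GEMMs over
      // disjoint K ranges, summed into C. On the device, each kb maps to its
      // own workgroup(s) and the final sum uses atomics or a reduction kernel.
      void gemm_split_k(const std::vector<float>& a, // M x K, row-major
                        const std::vector<float>& b, // K x N, row-major
                        std::vector<float>& c,       // M x N, zero-initialized
                        int M, int N, int K, int k_batch)
      {
          const int k_per_batch = (K + k_batch - 1) / k_batch;
          for(int kb = 0; kb < k_batch; ++kb)
          {
              const int k_begin = kb * k_per_batch;
              const int k_end   = std::min(K, k_begin + k_per_batch);
              for(int m = 0; m < M; ++m)
                  for(int n = 0; n < N; ++n)
                  {
                      float partial = 0.f;
                      for(int k = k_begin; k < k_end; ++k)
                          partial += a[m * K + k] * b[k * N + n];
                      c[m * N + n] += partial; // AtomicAdd territory on device
                  }
          }
      }
      ```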
  17. 07 Oct, 2022 1 commit
    • Optimization for gridwise group norm (#453) · 40942b90
      Shaojie WANG authored
      
      
      * use another instance to check the efficiency
      
      * optimize group layer norm
      
      * 1. coalesce load/store data for gridwise layer norm welford. 2. move a sqrt and division into an outer static loop
      
      * add more instances to layernorm
      
      * add 2 more test cases
      
      * remove ignore in generating tuple of vector
      Co-authored-by: Chao Liu <chao.liu2@amd.com>
  18. 22 Sep, 2022 1 commit
  19. 21 Sep, 2022 1 commit
  20. 20 Sep, 2022 5 commits
    • fix build (#427) · 567f70f5
      Chao Liu authored
      * fix build
      
      * fix build
    • MNKO padding support on bmm+masking+scale+softmax+bmm+permute (#425) · ebab84b6
      Shaojie WANG authored
      
      
      * add lower triangle bmm
      
      * init code for tile skipping
      
      * functionality right with lower triangle mask
      
      * add decoder lower triangular mask calculation
      
      * use 7*13 group
      
      * fix n2 compute error
      
      * attention with lower triangle mask with tile skipping
      
      * add template to distinguish masking kernel
      
      * rename template and remove default template value
      
      * remove lower triangle gemm reference struct
      
      * add some comments on example
      
      * add 10 instance for masking bmm + scale + softmax + bmm + permute kernels
      
      * add test
      
      * add test file
      
      * add gtest for bmm masking scale softmax bmm permute
      
      * clang-format
      
      * fix compile error
      
      * check left bottom corner for tile skipping
      
      * fix error: check left bottom corner for tile skipping
      
      * add k padding
      
      * add test and instance for MNK padding
      
      * passing a mask struct
      
      * fix instances
      
      * delete unused comments
      
      * format
      Co-authored-by: danyao12 <yaodan@dc-smc-13.amd.com>
      Co-authored-by: Chao Liu <chao.liu2@amd.com>
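      The tile-skipping commits above exploit the lower-triangular (causal) mask: an output tile whose bottom-left corner is already masked out contains no valid element at all, so the whole tile can be skipped. A hedged sketch of that predicate (hypothetical helper, not the kernel code):

      ```cpp
      // Hedged sketch: with a lower-triangular mask, element (m, n) is valid
      // only when n <= m. The bottom-left corner of a tile is its largest m and
      // smallest n, so if even that corner is masked, the whole tile is.
      bool tile_fully_masked(int m_begin, int m_end, int n_begin)
      {
          const int bottom_row = m_end - 1; // largest m index in the tile
          return n_begin > bottom_row;      // left-most column past the diagonal
      }
      ```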
    • Group norm (#417) · 4eba345f
      rocking5566 authored
      
      
      * Add groupnorm example by layernorm
      1. Reference is not ready
      2. Shape of gamma and beta needs to be fixed
      
      * Let shape of gamma and beta be the same as x
      
      * Modify test, instance and client example
      
      * [What] Fix bug of layernorm for more than 2 dimensions.
      [Why] We need to get upper length from merge transform instead of embed transform.
      
      * Add reference for groupnorm
      
      * Fuse sigmoid after groupnorm
      
      * [What] Rename original layernorm into layernorm2d
      [Why] Prepare to add groupnorm using layernorm5d
      
      * clang-format
      
      * Add groupnorm test
      
      * Refine error message
      
      * Add groupnorm ckProfiler
      
      * Test groupnorm kernel from device_instance
      
      * update example
      
      * update profiler
      
      * Fix test naming
      
      * Fix argc number
      
      * Move descriptor and sweeponce to argument for quick debugging
      Co-authored-by: Chao Liu <chao.liu2@amd.com>
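      The layernorm5d trick above views an NHWC tensor with C = G * (C/G) channels as five dimensions (N, H, W, G, C/G) and normalizes each (N, G) slice over (H, W, C/G), exactly like one layernorm row. A hedged index sketch of the per-slice mean (illustrative only, not the instance code):

      ```cpp
      // Hedged sketch: x is NHWC with C = G * Cg channels, viewed as
      // (N, H, W, G, Cg). Shown here: the mean of one (n, g) slice of
      // H * W * Cg elements; variance/normalization follow the same indexing.
      float group_mean(const float* x, int n, int g, int H, int W, int G, int Cg)
      {
          double sum = 0.0;
          for(int h = 0; h < H; ++h)
              for(int w = 0; w < W; ++w)
                  for(int c = 0; c < Cg; ++c)
                      sum += x[(((n * H + h) * W + w) * G + g) * Cg + c];
          return static_cast<float>(sum / (static_cast<double>(H) * W * Cg));
      }
      ```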
    • Add 'Permute' device op & example (#408) · f584ab0c
      Po Yen Chen authored
      * Add example folder for 'DeviceElementwise'
      
      * Re-structure example files
      
      * Move common parts into common.hpp
      
      * Use more strict input
      
      * Add more helper methods in 'DeviceElementwise'
      
      * Use more specific method to write example
      
      * Allow specifying problem through command line argument
      
      * Allow specifying problem 'axes' through command line argument
      
      * Add check to template type argument
      
      * Add transpose_shape() to generalize shape permute
      
      * Generalize transpose utility functions
      
      * Use better name for tensor indices
      
      * Add checks in helper functions
      
      * Remove debug messages
      
      * Refine error message for check_err()
      
      * Generalize variable naming in example code
      
      * Add device op 'DevicePermute'
      
      This device op is a clone of 'DeviceElementwise'
      
      * Use 'DevicePermute' device op in example
      
      * Remove 'elementwise' from identifiers
      
      * Remove 'elementwise' from file paths
      
      * Remove base class of 'DevicePermute'
      
      * Let 'DevicePermute' inherit from 'BaseOperator'
      
      * Add simple type traits to validate device op type
      
      * Add static_assert() to check type constraints
      
      * Create 'DevicePermuteBase' to generate methods
      
      * Use indirect base type to generate methods
      
      * Remove 'is_device_op<>' type traits
      
      * Only accept single-input-single-output for 'DevicePermute'
      
      * Simplify 'DevicePermute' interface
      
      * Re-format 'DeviceElementwise'
      
      * Use CRTP to generate overridden virtual method
      
      * Remove unnecessary include directives
      
      * Distinguish input & output shape in 'DevicePermute'
      
      * Passing 'axes' to 'DevicePermute'
      
      * Use more reasonable return value for Invoker::Run()
      
      * Add 'GridwisePermute' kernel
      
      This kernel is a clone of 'GridwiseElementwise_1D'
      
      * Remove no-longer used type argument
      
      * Check if input/output shape meet the requirement
      
      * Remove no-longer used method
      
      * Remove never-entered-if-clause
      
      * Change problem description for 'DevicePermute'
      
      * Transform descriptor into 3 dimensions
      
      * Add debug code the verify result
      
      * Add comment to indicate template argument location
      
      * Add N/H/WPerBlock template parameter to 'DevicePermute'
      
      * Rename 'GridwisePermute' to 'GridwiseCopy'
      
      * Check tensor descriptor dimensions in 'GridwiseElementwise_1D'
      
      * Add missing include directive
      
      * Add 'BlockSize' parameter to 'DevicePermute'
      
      * Remove no-longer used method
      
      * Add 'BlockToTileMap' for 'GridwiseCopy'
      
      * Use the normal Block2TileMap convention
      
      * Rename 'BlockToTileMap' as 'Block2TileMap'
      
      * Fix most of compilation errors
      
      * Let 'Block2TileMap' map block to 2d coordinate
      
      * Allow data transfer in 'GridwiseCopy'
      
      * Fix wrong output descriptor for 2nd blockwise copy
      
      * Rename 'GridwiseCopy' as 'GridwisePermute'
      
      * Remove '1d' in identifiers
      
      * Remove commented-out codes
      
      * Remove 'MPerThread' template parameter
      
      * Separate template parameters
      
      * Unify variable naming convention
      
      * Use more verbose way to create expressions
      
      * Add template parameter 'InBlockLdsExtraW'
      
      * Release the constraint on In/OutGridDesc
      
      * Use data type directly as template argument
      
      * Re-arrange template arguments for blockwise copy
      
      * Remove no-longer used template parameters
      
      * Embed layout in the variable names
      
      * Add GridwisePermute::CheckValidity()
      
      * Extract local types as template parameters
      
      * Rename local type alias
      
      * Add more template parameters (vector width related)
      
      * Calculate new SrcVectorDim/DstVectorDim after merge descriptor dimensions
      
      * Fill tensor values start from 1
      
      * Re-format example code
      
      * Avoid too-large block id
      
      * Add comment
      
      * Make sure 'SrcVectorDim' is not same as 'DstVectorDim'
      
      * Add check for the 'VectorDim' & 'ScalarPerVector' template params
      
      * Let 'DstVectorDim' equals 'SrcVectorDim' after transpose out grid desc
      
      * Remove no-longer used template parameter 'NPerBlock'
      
      * Fix wrong descriptor creation logics
      
      * Specify problem in each examples
      
      * Use better example name
      
      * Add new example 'example_permute_NxHxW_fp32'
      
      * Add example demonstrating bundling multiple elems in a tensor
      
      * Add support to permute multiple elements together
      
      * Change the default problem size
      
      * Add span<> class template
      
      * Use span<> to generalize check_err() interface
      
      * Fix ambiguous ctor call
      
      * Avoid creating unnecessary objects
      
      * Use helper functions to simplify example code
      
      * Add example for 4xfp16 permute
      
      * Disable failed-to-compile example
      
      * Add check for the NUM_ELEMS_IN_BUNDLE
      
      * Remove redundant parameter in helper lambda function
      
      * Add check for the input tensor type's byte-size
      
      * Check scalar-per-vector with padded length
      
      * Use more verbose name to avoid name collision
      
      * Use fixed 'VectorDim' & 'ScalarPerVector' for LDS
      
      * Embed shape info in name of descriptor constructor
      
      * Rename example folder '36_permute' into '37_permute'
      
      * Avoid using too-large LDS in kernel code
      
      * Remove redundant example
      
      * Use switch() to group similar code
      
      * Add const to the span<> type argument
      
      * Simply initialize tensor with floating point values
      
      * Use fp16 as data type in all examples
      
      * Enlarge tensor size in example
      
      * Enlarge N-dim in example
      
      * Add check for the bundled type in example
      
      * Use stricter error threshold
      
      * Remove global load/store loop in kernel code
      
      * Measure execution time by default
      
      * Use faster device op config for example 'NxHxW_fp16'
      
      * Use faster device op config for example '1xHxW_fp16'
      
      * Use faster device op config for example 'HxWx4_fp16'
      
      * Remove cmd arg parsing logics
      
      * Rename functions
      
      * Extract bundle permutation logic out
      
      * Simplify permute bundle example
      
      * Add Tensor<>::GetElementSpaceSizeInBytes()
      
      * Add Tensor<>::data()
      
      * Use new methods to simplify code
      
      * Use type alias to replace duplicated code
      
      * Use existing method to shorten code
      
      * Allow FillUniformDistribution to accept range argument
      
      * Initialize random values in range
      
      * Add Tensor<>::size()
      
      * Use more meaningful names in permute bundle example
      
      * Use more meaningful names in permute element examples
      
      * Use rangified copy() to copy elements
      
      * Use function return value directly to eliminate variables
      
      * Add to_array() conversion tool to eliminate more variables
      
      * Add Tensor<>::AsSpan<>() to create view of tensor values
      
      * Use AsSpan() to shorten check_err() calls
      
      * Remove no-longer-used 'using' directives
      
      * Move 'using' directive to proper code position
      
      * Remove redundant variables
      
      * Remove useless static_assert()
      
      * Add check for range types
      
      * Declare variable right before first use
      
      * Move long return type as tailing return type
      
      * Add BaseInvokerCRTP<> class template to generate method
      
      * Create new base type for 'DevicePermute' implementations
      
      * Move 'NumDim' template param to the first
      
      * Rename 'DevicePermute' to 'DevicePermuteImpl'
      
      * Add 'noexcept' specifier to CRTP generated method
      
      * Move 'Block2TileMap' definition into 'GridwisePermute'
      
      * Use type alias to reduce code
      
      * Unify naming style in 'DevicePermute'
      
      * Add comments in 'GridwisePermute'
      
      * Rename permute example folder
      
      * Use std::cerr to report error
      
      * Use larger shape in examples
      
      * Rename '38_permute' to '39_permute'
      
      * Make sure we use unsigned type for shape & indices
      
      * Remove opt-ed out assertion
      
      * Remove template BaseInvokerCRTP<>
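      Semantically the op computes out(i) = in(i permuted by axes), with out.shape[d] = in.shape[axes[d]]; a small rank-3 host sketch of those semantics (illustrative only, not the DevicePermute API):

      ```cpp
      #include <array>
      #include <vector>

      // Hedged sketch of rank-3 permute semantics: output index (o0, o1, o2)
      // reads the input index whose axes[d]-th component is o_d. For example,
      // axes = {0, 2, 1} transposes H and W of an N x H x W tensor.
      void permute3(const std::vector<float>& in, std::vector<float>& out,
                    std::array<int, 3> in_lengths, std::array<int, 3> axes)
      {
          const std::array<int, 3> out_lengths{
              in_lengths[axes[0]], in_lengths[axes[1]], in_lengths[axes[2]]};
          for(int o0 = 0; o0 < out_lengths[0]; ++o0)
              for(int o1 = 0; o1 < out_lengths[1]; ++o1)
                  for(int o2 = 0; o2 < out_lengths[2]; ++o2)
                  {
                      std::array<int, 3> in_idx{};
                      in_idx[axes[0]] = o0;
                      in_idx[axes[1]] = o1;
                      in_idx[axes[2]] = o2;
                      out[(o0 * out_lengths[1] + o1) * out_lengths[2] + o2] =
                          in[(in_idx[0] * in_lengths[1] + in_idx[1]) * in_lengths[2] +
                             in_idx[2]];
                  }
      }
      ```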
    • Add batched attention special kernel instances (#424) · 7c788e10
      Anthony Chang authored
      * sanity check
      
      * add attribution
      
      * add irregular k tile size for batched attention
      
      * format