1. 04 Mar, 2022 4 commits
    • [Bf16 & int8] [example & ckprofiler] (#100) · 7e9a9d32
      rocking5566 authored
      
      
      * Add int8 of mk_nk_mn to the ckProfiler
      
      * Add example of int8 gemm
      
      * Fix typo, use ushort instead of half_t for bfloat16
      
      * replace ushortXXX_t with bhalfXXX_t
      
      * rename ushort to bhalf_t
      
      * Add bf16 example
      
      * Add bf16 gemm to ckProfiler
      
      * Fix alignment
      
      * Fix typo
      
      * Add unit test for gemm_xdl int8
      
      * Add gemm_xdl fp32 unit test
      
      * Add gemm_xdl bf16 unit test
      
      * fix build
      
      * fix build issue due to merge conflict
      
      * Fix build
      
      * Fix build error
      Co-authored-by: rocking <chunylai@amd.com>
      Co-authored-by: Chao Liu <chao.liu2@amd.com>
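The bfloat16 bullets above boil down to storing bf16 as a raw 16-bit integer (the `ushort` → `bhalf_t` rename). A minimal sketch of that representation, assuming simple truncation for the fp32→bf16 direction (CK's actual conversion helpers may round to nearest; the function names here are illustrative):

```cpp
#include <cstdint>
#include <cstring>

using bhalf_t = uint16_t; // raw bf16 bit pattern, as in the rename above

// bf16 keeps the sign bit, all 8 exponent bits, and the top 7 mantissa bits
// of an fp32 value, i.e. simply the upper 16 bits of the fp32 encoding.
inline bhalf_t float_to_bhalf(float f)
{
    uint32_t u;
    std::memcpy(&u, &f, sizeof(u));
    return static_cast<bhalf_t>(u >> 16); // truncate (round-to-nearest is also common)
}

inline float bhalf_to_float(bhalf_t h)
{
    uint32_t u = static_cast<uint32_t>(h) << 16;
    float f;
    std::memcpy(&f, &u, sizeof(f));
    return f;
}
```

Because bf16 shares the fp32 exponent range, the round trip is exact for values whose mantissa fits in 7 bits.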
    • fix typo in PR #101 (#107) · 0c79af12
      Chao Liu authored
    • Refactor threadwise copy using sfcurve (#101) · 0619ebf7
      Jianfeng Yan authored
      
      
      * add space_filling_curve
      
      * cleanup and move space_filling_curve into test
      
      * WIP: start refactoring threadwise_transfer_v1r3
      
      * threadwise_copy works but needs further refactoring
      
      * add some comments
      
      * add SpaceFillingCurve::GetIndices()
      
      * minor changes
      
      * removed GetIndices; refactored GetDstCoordinateResetStep
      
      * add DynamicBuffer::Transfer, but Add is not tested
      
      * rebased against develop
      
      * threadwise_copy_v6r1/v6r2/v6r3 using space-filling curve start to work
      
      * minor changes
      
      * refactored threadcopy v3r1, v2; removed old implementations
      
      * clang-format
      
      * cleanup
      
      * fix a typo in v6r3
      
      * format
      Co-authored-by: Chao Liu <chao.liu2@amd.com>
    • NHWC conv 2d: bwd fp32/fp16/bfp16/int8, Device level tuning and host API (#92) · c254e5ab
      ltqin authored
      
      
      * start conv2d bwd api
      
      * kernel running
      
      * add bwd reference
      
      * change to no shuffle
      
      * fix bwd reference
      
      * pass verification
      
      * add Filter1x1Stride1Pad0 and start testing
      
      * change some tuning parameter
      
      * fix test error
      
      * add fp16 tuning parameter
      
      * add bf16 tuning parameter
      
      * add int8 tuning parameters
      
      * change fp32 tuning parameter
      
      * add bwd to profiler
      
      * fix bug for bwd profiler
      
      * fix ckProfiler bug
      
      * change conv2d_bwd_xdl to fp16
      
      * fix bug in comments
      
      * fix precompile id
      
      * fix enum conv name
      
      * change _bwd_ to _bwd_data_
      
      * change conv2d_bwd example id
      
      * bwd to bwd data
      
      * fix prehead
      
      * fix MakeDefaultBlock2CTileMap, imported from merge of develop
      
      * format bwd instance
      
      * bwd to bwd data
      
      * change name bwd to bwd data
      
      * change name bwd to bwd data in example
      
      * format code
      
      * change conv2d bwd data id in example
      
      * rewrite readme for example
      
      * fix CalculateMagicNumbers about div zero
      
      * add workaround CK_WORKAROUND_SWDEV_325164
      
      * change info shown by test_conv2d_bwd_data
      
      * format
      
      * fix bug for workaround:CK_WORKAROUND_SWDEV_325164
      
      * format tuning parameters

      * format tuning parameters again

      * format tuning parameters 3

      * format tuning parameters 4
      
      * remove add function template
      
      * format
      
      * update comment
      Co-authored-by: ltqin <letaoqin@amd.com>
      Co-authored-by: Chao Liu <chao.liu2@amd.com>
  2. 28 Feb, 2022 1 commit
    • Allow distinct K0/K1 values for A/B block descriptor (#98) · 6d4450ef
      Anthony Chang authored
      
      
      * add gitignore
      
      * host tensor: allow generating sequentially increasing value in a given dimension
      
      * gridwise gemm v3r1: allow distinct K0/K1 values for A/B block descriptor
      
      - remove dangling header include
      - modify example gemm_xdl accordingly
      - infer KPack value from M/NPerXdl
      - device conv2d fwd: update parameters accordingly for the underlying gridwise gemm v3r1
      (API for conv2d fwd stays the same for now until we decide to expose individual K0s for activation and weight)
      
      * add LDS data dump utility
      
      * profiler: reflect API change for distinct K0/K1 for A/B matrices
      
      * profiler: add conflict-free LDS write FP16 kernel instances
      
      * fix accidental perf regression
      
      * address feedback; cosmetic changes
      
      * clang-format for new files
      
      * format
      Co-authored-by: Chao Liu <chao.liu2@amd.com>
  3. 25 Feb, 2022 2 commits
    • Split k f16 (#97) · e221d11e
      zjing14 authored
      
      
      * init for splitk f16
      
      * a working prototype
      
      * debug
      
      * perf debug
      
      * update example
      
      * instances for mk kn
      
      * add instances for all layers
      
      * clean
      
      * clean
      
      * add tuning
      
      * format
      
      * add mn_padding into irregular tile
      
      * clean
      Co-authored-by: Chao Liu <chao.liu2@amd.com>
    • Space filling curve (#96) · bdedf64b
      Jianfeng Yan authored
      * add space_filling_curve
      
      * cleanup and move space_filling_curve into test
      
      * add functions for backward and forward step; hard coded results in unit test
      
      * minor changes
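The space-filling curve introduced here lets a threadwise copy visit a tensor slice so that consecutive accesses differ by a small step in a single dimension, which keeps coordinate updates cheap. A toy 2D "snake" ordering illustrating that property (the real `SpaceFillingCurve` is a compile-time template over slice lengths and access order; `snake_order` is a hypothetical name used only for this sketch):

```cpp
#include <array>
#include <vector>

// Enumerate a rows x cols tile in "snake" order: even rows left-to-right,
// odd rows right-to-left, so consecutive indices differ by a +/-1 step in
// exactly one dimension -- the property a space-filling curve gives a
// threadwise copy for cheap forward/backward coordinate steps.
std::vector<std::array<int, 2>> snake_order(int rows, int cols)
{
    std::vector<std::array<int, 2>> idx;
    for(int r = 0; r < rows; ++r)
    {
        if(r % 2 == 0)
            for(int c = 0; c < cols; ++c) idx.push_back({r, c});
        else
            for(int c = cols - 1; c >= 0; --c) idx.push_back({r, c});
    }
    return idx;
}
```

Note the transition between rows: the column stays fixed while the row advances, so no access ever jumps across the tile.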
  4. 23 Feb, 2022 3 commits
    • Add gridwise GEMM pipeline (#89) · 22d438ae
      Chao Liu authored
      * clean up
      
      * add multiple thread scratch to ThreadwiseTensorSliceTransfer_v3r1
      
      * add 2 stage prefetch
      
      * add more sanity check into transform_tensor_descriptor
      
      * tweak
      
      * enabling 2 stage prefetch to existing gridwise gemm; tweak

      * enabling 2 stage prefetch to existing gridwise gemm
      
      * move gridwise gemm pipeline in class; clean up
      
      * add some irregular tile size
      
      * update CalculateHasMainK0BlockLoop for multi-stage-prefetch
      
      * refactor gridwise gemm pipeline class
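The 2-stage prefetch mentioned above is classic double buffering: fetch tile i+1 while tile i is being consumed. A sequential sketch of the buffer rotation (the name `pipelined_sum` and the scalar "compute" are illustrative; on the GPU the load would be a global→LDS copy overlapped with MFMA work):

```cpp
#include <vector>

// 2-stage prefetch (double buffering): while tile i is consumed, tile i+1
// is loaded into the other buffer. Here the load and compute run
// sequentially, but the prologue + rotation structure matches the pipeline.
int pipelined_sum(const std::vector<int>& data, int tile)
{
    const int num_tiles = static_cast<int>(data.size()) / tile;
    std::vector<int> buf[2] = {std::vector<int>(tile), std::vector<int>(tile)};

    auto load = [&](int t, int which) {
        for(int j = 0; j < tile; ++j) buf[which][j] = data[t * tile + j];
    };

    int acc = 0;
    load(0, 0); // prologue: prefetch the first tile before the main loop
    for(int t = 0; t < num_tiles; ++t)
    {
        if(t + 1 < num_tiles) load(t + 1, (t + 1) % 2); // prefetch next tile
        for(int v : buf[t % 2]) acc += v;               // consume current tile
    }
    return acc;
}
```

The `CalculateHasMainK0BlockLoop` bullet relates to this: with multi-stage prefetch, the main loop only runs when there are enough K0 iterations left after the prologue.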
    • Unify Convolution FWD XDL 1D/2D implementation. (#93) · 756a7617
      Adam Osewski authored
      
      
      * Convolution ND
      
      * Code unification across dimensions for generating tensor descriptors.
      * Example
      * Instances
      
      * Move convnd f32 instance file to comply with repo structure.
      
      * Conv 1D tensor layouts.
      
      * Formatting and use ReferenceConv
      
      * Reference ConvFwd supporting 1D and 2D convolution.
      
      * Debug printing TensorLayout name.
      
      * Conv fwd 1D instance f32
      
      * Refactor conv ND example.
      
      Needed to support various conv dimensions.
      
      * Rename conv nd example directory to prevent conflicts.
      
      * Refactor some common utility to single file.
      
      Plus some tests.
      
      * Refactor GetHostTensorDescriptor + UT.
      
      * Add 1D test case.
      
      * Test reference convolution 1d/2d
      
      * Remove some leftovers.
      
      * Fix convolution example error for 1D
      
      * Refactor test check errors utility function.
      
      * Test Conv2D Fwd XDL
      
      * More UT for 1D case.
      
      * Parameterize input & weight initializers.
      
      * Rename example to prevent conflicts.
      
      * Split convnd instance into separate files for 1d/2d
      
      * Address review comments.
      
      * Fix data type for flops/gbytes calculations.
      
      * Assign example number 11.
      Co-authored-by: Adam Osewski <aosewski@amd.com>
      Co-authored-by: Chao Liu <chao.liu2@amd.com>
    • Conv3d new (#94) · 6dfb92bb
      Jianfeng Yan authored
      
      
      * conv3d compiles but has memory error
      
      * conv3d works
      
      * fix performance issue by using __builtin_amdgcn_readfirstlane
      
      * change MakeBlock2CTileMap to MakeDefaultBlock2CTileMap; change c_blockid_to* to cblockid_to*
      
      * clang-format
      
      * remove CK_EXPERIMENTAL_PASS_TENSOR_DECRIPTOR_BY_*; moved wrapper into DeviceConv3d
      
      * format
      
      * remove useless macro
      
      * add comment
      Co-authored-by: Chao Liu <chao.liu2@amd.com>
  5. 19 Feb, 2022 1 commit
    • Initial Setup for CI (#86) · 2778e997
      JD authored
      
      
      * add docker file and make default target buildable
      
      * add Jenkinsfile
      
      * remove empty env block
      
      * fix package stage
      
      * remove render group from docker run
      
      * clean up Jenkins file
      
      * add cppcheck as dev dependency
      
      * update cmake file
      
      * Add profiler build stage
      
      * add hip_version config file for reduction operator
      
      * correct jenkins var name
      
      * Build release instead of debug
      
      * clean up
      Co-authored-by: Chao Liu <chao.liu2@amd.com>
  6. 12 Feb, 2022 1 commit
    • NHWC conv 2d: fwd bfp16/int8, Device level tuning and host API (#73) · 880fbee9
      ltqin authored
      
      
      * add fwd bf16 conv
      
      * change tuning parameter

      * add int8 for conv fwd

      * remove comments

      * change tuning parameter for int8
      
      * change init int8 example
      
      * add test for conv2d fwd
      
      * change device operation file pos because merge develop
      
      * fwd int8 use reference
      
      * test_conv_fwd use reference
      
      * add braces for if statement
      
      * rename fwd example name
      
      * remove StaticBufferOfVectorTypeV2
      
      * tweak example
      Co-authored-by: ltqin <letaoqin@amd.com>
      Co-authored-by: Chao Liu <chao.liu2@amd.com>
  7. 11 Feb, 2022 2 commits
    • Batched GEMM for fp16 (#79) · b53e9d08
      zjing14 authored
      * prepare host for batched_gemm
      
      * init commit of batched kernels
      
      * fixed
      
      * refine transform with freeze
      
      * m/n padding
      
      * fixed a bug; clean
      
      * add small tiles
      
      * clean
      
      * clean code
      
      * clean code
      
      * add nt, tn, tt layout
      
      * add missing file
      
      * use StaticBufferTupleOfVector instead
      
      * add reference_batched_gemm
      
      * fixed a macro
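The `reference_batched_gemm` bullet suggests a plain host-side loop nest: one independent GEMM per batch index. A hedged sketch, assuming row-major layouts and float data (the CK reference is templated over layouts, data types, and element-wise operators; this signature is illustrative):

```cpp
#include <vector>

// Reference batched GEMM: for each batch g, C[g] = A[g] * B[g].
// A is G x M x K, B is G x K x N, C is G x M x N, all row-major.
void reference_batched_gemm(const std::vector<float>& A,
                            const std::vector<float>& B,
                            std::vector<float>& C,
                            int G, int M, int N, int K)
{
    for(int g = 0; g < G; ++g)
        for(int m = 0; m < M; ++m)
            for(int n = 0; n < N; ++n)
            {
                float acc = 0.f;
                for(int k = 0; k < K; ++k)
                    acc += A[(g * M + m) * K + k] * B[(g * K + k) * N + n];
                C[(g * M + m) * N + n] = acc;
            }
}
```

The nt/tn/tt layout bullets correspond to swapping which of the two inner strides is 1 for A and B; the batch stride handling is unchanged.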
    • Support alpha beta scaling for GEMM (#78) · 6f928a08
      rocking5566 authored
      
      
      * [What] Add 2d version of bias, prepare to implement alpha / beta scaling
      
      * Add alpha / beta functor
      
      * Refine parameter of example
      
      * [What] Use real type instead of template
      [Why] Prevent implicit cast
      
      * Rename parameter for general operator
      
      * Remove redundant comment
      
      * Fix compile error
      Co-authored-by: rocking <chunylai@amd.com>
      Co-authored-by: Chao Liu <chao.liu2@amd.com>
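Alpha/beta scaling here is the classic BLAS-style epilogue y = alpha·(A·B) + beta·C, applied per output element after accumulation. A sketch of such a functor (the name `AlphaBetaScale` and the exact signature are illustrative, not CK's; CK passes an operator like this as a template parameter, per the "general operator" bullet):

```cpp
// Element-wise epilogue functor: applied to each output element after the
// GEMM accumulation, combining it with the corresponding C-matrix element.
struct AlphaBetaScale
{
    float alpha;
    float beta;

    // y = alpha * gemm_result + beta * c
    float operator()(float gemm_result, float c) const
    {
        return alpha * gemm_result + beta * c;
    }
};
```

The "[What] Use real type instead of template / [Why] Prevent implicit cast" bullet is about exactly this call site: fixing the operand types so fp16/fp32 mixing is explicit.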
  8. 07 Feb, 2022 1 commit
    • GEMM+Bias+ReLU+Add (#76) · 823657ed
      Chao Liu authored
      * tweak conv for odd C
      
      * update script
      
      * clean up elementwise op
      
      * fix build
      
      * clean up
      
      * added example for gemm+bias+relu+add
      
      * added example for gemm+bias+relu
      
      * add profiler for gemm_s_shuffle; re-org files
      
      * add profiler
      
      * fix build
      
      * clean up
      
      * clean up
      
      * clean up
      
      * fix build
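The fused GEMM+Bias+ReLU+Add epilogue applies a bias add, a ReLU, then a residual add to each output element during the write-out, avoiding extra kernel launches and memory round trips. A scalar sketch (CK expresses this as an element-wise operator plugged into the device op; this free function is illustrative):

```cpp
#include <algorithm>

// Fused per-element epilogue: out = relu(gemm_out + bias) + residual.
// On the GPU this runs inside the output shuffle/write stage, so the
// intermediate (pre-activation) value never touches global memory.
float bias_relu_add(float gemm_out, float bias, float residual)
{
    return std::max(gemm_out + bias, 0.0f) + residual;
}
```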
  9. 03 Feb, 2022 2 commits
    • Replace LLVM intrinsics with clang builtins (#65) · 6d92959a
      zjing14 authored
      * test mfma builtins

      * add fp16 builtins

      * add int8 builtins

      * add bf16 builtins
      
      * simplify host conv forward
      
      * clean
      
      * clean
    • add split-k GEMM (#59) · 4be7f019
      ltqin authored
      
      
      * add DeviceGemmSplitKXdl
      
      * add file device_gemm_splitk_xdl.hpp
      
      * set c matrix zero
      
      * using atomic
      
      * add all tuning parameter to f32 mkkn
      
      * grid size change to 720
      
      * add tuning parameter for NT

      * add tuning parameter for TN

      * add tuning parameter for TT

      * add m=96 tuning parameter
      
      * add lost config
      
      * add element wise operation
      
      * fixed MPerBlock=96
      
      * remove macro for split-k switch
      
      * add test
      
      * add new line at the end of device_gemm_xdl_instance.hpp
      
      * remove step hack
      
      * separate split-k instance files

      * add tuning parameters

      * change desired grid size to parameters

      * remove slice length

      * add desiredgridsize parameter to ckProfiler

      * add missing file device_gemm_xdl_splitk_instance.hpp

      * change desired grid size to kbatch
      
      * format
      
      * format
      
      * clean up
      
      * add selection of device_instances
      
      * clean code
      
      * fix build issue
      Co-authored-by: ltqin <letaoqin@amd.com>
      Co-authored-by: Chao Liu <chao.liu2@amd.com>
      Co-authored-by: Jing Zhang <jizhan@amd.com>
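Split-K GEMM partitions the K dimension across kbatch workgroups; each computes a partial product over its K-slice and accumulates into a zero-initialized C, which is why the bullets mention "set c matrix zero" and "using atomic". A sequential sketch of the decomposition (names are illustrative; on the GPU each batch iteration is a separate workgroup and the `+=` is an atomic add):

```cpp
#include <vector>

// Split-K GEMM: K is split into kbatch chunks; each chunk's partial
// M x N product is accumulated into C, which must start zeroed.
void splitk_gemm(const std::vector<float>& A, // M x K, row-major
                 const std::vector<float>& B, // K x N, row-major
                 std::vector<float>& C,       // M x N, assumed zero-initialized
                 int M, int N, int K, int kbatch)
{
    const int k_per_batch = K / kbatch; // assume kbatch divides K evenly
    for(int b = 0; b < kbatch; ++b)     // each iteration = one workgroup's K-slice
        for(int m = 0; m < M; ++m)
            for(int n = 0; n < N; ++n)
            {
                float partial = 0.f;
                for(int k = b * k_per_batch; k < (b + 1) * k_per_batch; ++k)
                    partial += A[m * K + k] * B[k * N + n];
                C[m * N + n] += partial; // atomic add on the GPU
            }
}
```

Split-K trades extra output traffic for more parallelism, which pays off when M·N alone yields too few workgroups to fill the device.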
  10. 26 Dec, 2021 1 commit
    • Fusion Conv+Bias+ReLU(+Add) (#62) · acbd7bd7
      Chao Liu authored
      * fix relu
      
      * clean up
      
      * clean up
      
      * adding 1x1 conv
      
      * adding 1x1 conv
      
      * added 1x1 conv
      
      * refactor
      
      * refactor
      
      * refactor
      
      * added profiler for conv+bias+relu+add
      
      * clean up
      
      * adding conv+bias+relu
      
      * adding conv+bias+relu
      
      * added conv+bias+relu
      
      * Update README.md
      
      * update cpu verification
      
      * adding c shuffle
      
      * update static_tensor for dealing with invalid element
      
      * adding c shuffle
      
      * debugging
      
      * fix bug
      
      * convert to fp16 before shuffle
      
      * shuffle more than one M/NRepeat
      
      * clean up
      
      * remove coordinate step hack from GridwiseGemm_k0mk1_k0nk1_mn_xdlops_v3r1
      
      * clean up
      
      * remove coordinate step hack from all gridwise gemm xdl
      
      * clean up coordinate step hack
      
      * clean up coordinate step hack
      
      * ThreadwiseTensorSliceTransfer_v3r2 support pointwise op on both src and dst
      
      * adding output shuffle in conv+bias+relu+add
      
      * update
      
      * added conv+bias+relu+add with c shuffle
      
      * added conv+bias+relu+add with c shuffle
      
      * fix forward_sweep bugs in threadwise copy
      
      * clean up
      
      * refactor
      
      * clean up
      
      * clean up
      
      * added conv_c_shuffle+bias_relu
      
      * clean up
      
      * added conv+bias+relu+atomic_add
      
      * clean up
      
      * clean up
      
      * clean up
      
      * clean up
      
      * clean up
      
      * clean up
      
      * misc fixes; add 1x1 specialization
      
      * clean up
      
      * delete unused device op
      
      * clean up
      
      * add support for odd C value
  11. 13 Dec, 2021 1 commit
    • manually apply bug fix changes in pr #63 (#64) · a4f24233
      Chao Liu authored
      * Bug in BlockwiseGemmXdlops_k0mk1_k0nk1_m0n0m1n1m2m3m4n2_v1::MakeCGridDescriptor_M0_N0_M1_N1_M2_M3_M4_N2()
      * Bug in ThreadwiseTensorSliceTransfer_v1r3 logic for calculating "forward_sweep"
  12. 03 Dec, 2021 1 commit
    • GEMM/Conv+BiasAdd+ReLU+Add (#55) · 41cdd380
      Chao Liu authored
      * gemm+activation
      
      * move C pointwise operation into threadwise copy
      
      * add pointwise operation to A/B matrix
      
      * update ckProfiler
      
      * adding bias add
      
      * adding bias add
      
      * adding bias add
      
      * added bias add; worked around compiler issues
      
      * clean up
      
      * clean up
      
      * Update README.md
      
      * Update README.md
      
      * Update README.md
      
      * clean up
      
      * add conv_xdl example
      
      * adding conv_xdl_bias_relu_add example
      
      * add conv+bias+relu+add, but has register spill issue
      
      * tweak
      
      * tweak
      
      * refactor
      
      * Update README.md
      
      update readme for example/2_gemm_xdl_bias_relu_add
      
      * clean up
      
      * Update README.md
      
      update readme for example/3_conv_xdl
      
      * Update README.md
  13. 02 Dec, 2021 3 commits
  14. 18 Nov, 2021 2 commits
    • Use __builtin_memcpy to implement bit_cast and for accessing vector from pointer of scalars (#53) · 64350aff
      Chao Liu authored
      * reworking vector_type
      
      * use __builtin_memcpy for bit_cast and vector access of scalar pointer
      
      * clean up
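`bit_cast` via `__builtin_memcpy` reinterprets an object's bit pattern without the undefined behavior of pointer-based type punning; compilers optimize the memcpy away. A sketch of the pattern the commit title describes (C++20 later standardized this as `std::bit_cast`; this pre-C++20 version relies on the gcc/clang builtin):

```cpp
#include <cstdint>
#include <type_traits>

// Reinterpret the object representation of src as a Dst. The memcpy is the
// only well-defined way to do this pre-C++20, and it compiles to no code.
template <typename Dst, typename Src>
Dst bit_cast(const Src& src)
{
    static_assert(sizeof(Dst) == sizeof(Src), "sizes must match");
    static_assert(std::is_trivially_copyable<Dst>::value &&
                      std::is_trivially_copyable<Src>::value,
                  "types must be trivially copyable");
    Dst dst;
    __builtin_memcpy(&dst, &src, sizeof(Dst));
    return dst;
}
```

The same memcpy trick covers the second half of the title: loading a vector type from a pointer to scalars without violating strict aliasing.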
    • v5r1 fusion kernels for inference (#49) · 970fa3e9
      zjing14 authored
      
      
      * init
      
      * refactor for 1x1
      
      * rename e0_e1
      
      * add e1 with bugs
      
      * debug
      
      * fixed
      
      * fixed e1
      
      * add timer
      
      * improve threadwise gemm with dot2
      
      * add e2
      
      * tuning
      
      * separate c2
      
      * add nhwc
      
      * restore nchwc
      
      * clean
      
      * opt
      
      * fixed; tuning
      
      * add BGlobalMoveSliceWindowStepHacks{}
      
      * tuning
      
      * repeat running
      
      * adjust
      
      * merge v5r1 nchwc
      
      * add adaptors
      
      * split k0 k1 in c_thread_grid
      
      * split h and w
      
      * remove v5r1 nhwc
      
      * clean for pr
      
      * remove host_conv_add
      
      * clean code
      
      * clean
      
      * add dynamic support
      
      * static mode
      
      * test static
      
      * add conv+add fusion
      
      * fixed validation
      
      * naming fix
      
      * use activ_enum
      
      * make static
      
      * refactor conv_add for InMem::add
      
      * add bias
      
      * add conv_out
      
      * add configurable makeddesc
      
      * add maxpool fusion
      
      * add maxpool host for validation
      
      * enable static desc
      
      * conv-only use v5r1_add
      
      * test
      
      * test
      
      * for binary dumps
      
      * fixed incorrect results due to typo
      
      * clean
      
      * debugging maxpool
      
      * workaround with offset trick
      
      * clean code
      
      * modularize ops of fusion
      
      * add gridwise_gemm_v3
      
      * create separate fusion function
      
      * enable dynamic mode of conv and conv+resize_add
      
      * add dynamic mode of maxpool
      
      * add pass by point
      
      * add activ_type as arguments
      
      * merge develop
      
      * clean
      
      * reset config to old default
      Co-authored-by: Chao Liu <chao.liu2@amd.com>
  15. 16 Nov, 2021 2 commits
  16. 15 Nov, 2021 2 commits
    • Add bfp16/int8 support into XDL GEMM operator (#50) · 3737bb03
      zjing14 authored
      
      
      * init StaticBufferV2
      
      * clean
      
      * adopt old output stage for staticBufferV2
      
      * clean
      
      * remove hack
      
      * clean
      
      * clean
      
      * add parameters
      
      * clean code
      
      * move c_buffer alloc into blockwise gemm
      
      * add adaptors for m/n_thread_data_on_grid
      
      * tweak gemm
      
      * adjust blockwise_gemm_xdlops
      
      * tweak
      
      * update conv
      
      * update script
      
      * adding bwd 1x1
      
      * update script
      
      * adding 1x1 bwd
      
      * debugging bwd 1x1 failure
      
      * update script
      
      * update script
      
      * test
      
      * test v100
      
      * add bf16_1k
      
      * clang-format
      
      * clean
      
      * add bfp16 for gfx908
      
      * add verification
      
      * clean up
      
      * clean code
      
      * restore bf16
      
      * clean
      
      * add bfp16 support into gemm_driver
      
      * apply new generator to other drivers
      
      * add int8 support
      
      * clean
      
      * clean
      
      * clean
      
      * clean
      Co-authored-by: Chao Liu <chao.liu2@amd.com>
      Co-authored-by: Chao Liu <lc.roy86@gmail.com>
      Co-authored-by: root <root@hayabusa6111.amd.com>
    • FP16 data in-register transpose (#41) · b491ebf3
      Chao Liu authored
      * start fixing 16bit data packing
      
      * adding StaticTensor
      
      * adding StaticTensor
      
      * adding StaticTensor
      
      * add missing constexpr
      
      * adding static tensor
      
      * adding static tensor
      
      * adding transpose
      
      * add inline asm for transpose 2x2 of half_t
      
      * add general transpose_vectors(), but have unnecessary register initialization using v_mov
      
      * fix unnecessary register initialization in transpose_vector by using more pass-by-reference
      
      * add hardcoded logic for NHWC wrw
      
      * improve asm for v_pack
      
      * make ThreadwiseTensorSliceTransfer_v3r2 support any tensor
      
      * tweak
      
      * reorganize file
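The in-register fp16 transpose packs two half values per 32-bit register and swaps the cross lanes, which is what the `v_pack` inline asm above accelerates. A scalar bit-twiddling sketch of the 2x2 case (illustrative only; the real code operates on GCN vector registers via inline asm):

```cpp
#include <cstdint>

// 2x2 transpose of packed 16-bit lanes: register x holds (a1, a0) and
// register y holds (b1, b0), each as two half_t lanes in one 32-bit word.
// The transposed pair is r0 = (b0, a0), r1 = (b1, a1) -- i.e. r0 collects
// the low lanes of x and y, r1 collects the high lanes.
inline void transpose_2x2(uint32_t x, uint32_t y, uint32_t& r0, uint32_t& r1)
{
    r0 = (x & 0x0000FFFFu) | (y << 16); // low half of x, low half of y
    r1 = (x >> 16) | (y & 0xFFFF0000u); // high half of x, high half of y
}
```

Each output line maps to a single `v_pack_b32_f16`-style instruction, which is why the commit tweaks the asm rather than shifting and masking.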
  17. 14 Nov, 2021 1 commit
    • ckProfiler and device-level XDL GEMM operator (#48) · e823d518
      Chao Liu authored
      * add DeviceGemmXdl
      
      * update script
      
      * fix naming issue
      
      * fix comment
      
      * output HostTensorDescriptor
      
      * rename
      
      * padded GEMM for fwd v4r4r4 nhwc
      
      * refactor
      
      * refactor
      
      * refactor
      
      * adding ckProfiler
      
      * adding ckProfiler
      
      * refactor
      
      * fix tuning parameter bug
      
      * add more gemm instances
      
      * add more fp16 GEMM instances
      
      * fix profiler driver
      
      * fix bug in tuning parameter
      
      * add fp32 gemm instances
      
      * small fix
      
      * refactor
      
      * rename
      
      * refactor gemm profiler; adding DeviceConv and conv profiler
      
      * refactor
      
      * fix
      
      * add conv profiler
      
      * refactor
      
      * adding more GEMM and Conv instance
      
      * Create README.md
      
      Add build instruction for ckProfiler
      
      * Create README.md
      
      Add Readme for gemm_xdl example
      
      * Update README.md
      
      Remove build instruction from top most folder
      
      * Update README.md
      
      * clean up
  18. 27 Oct, 2021 1 commit
  19. 21 Oct, 2021 1 commit
  20. 19 Oct, 2021 2 commits
    • bug fix (#39) · c3018794
      Chao Liu authored
    • add nchw atomic, nhwc and nhwc atomic method for backward weight (#30) · fd49ff80
      ltqin authored
      
      
      * add new algorithm from v4r4r2
      
      * program once issue
      
      * add split k function
      
      * redefine code
      
      * add a matrix unmerge
      
      * add b matrix unmerge k0
      
      * trans a and b to gridwise gemm
      
      * nhwc init
      
      * no hacks and vector load
      
      * add hacks
      
      * modify some parameter
      
      * fix tuning parameter for fp32

      * fix tuning parameter for fp16
      
      * start change gridwise k split
      
      * init ok
      
      * remove a b matrix k0mk1 desc in grid

      * rewrite calculate gridsize
      
      * add kbatch to CalculateBottomIndex
      
      * remove some unused functions
      
      * add clear data function before call kernel
      
      * out hacks
      
      * in hacks
      
      * rename device convolution file and function name
      
      * modify kBatch value
      
      * fix some tuning code
      
      * start from v4r4 nhwc
      
      * nhwc atomic is able to run
      
      * just for fp32
      
      * enable nchw atomic
      
      * tweak
      
      * tweak
      
      * re-arrange gridwise gemm hot loop for wrw
      
      * add wrw v4r5
      
      * v4r4r5 fp16
      
      * v4r4r4 fp16
      
      * v4r4r2 fp16
      
      * V4R4R4XDLNHWC fp16
      
      * V4R4R2XDLATOMICNCHW fp16
      
      * adjust for fp16
      
      * input gridsize
      
      * change kbatch to gridsize
      
      * testing wrw
      
      * clean up
      
      * k_batch to gridsize
      
      * fix bug
      
      * wrw v4r4r4 kbatch change to grid size

      * wrw v4r4r2 kbatch change to grid size
      
      * after merge , change gridwise gemm v2r4
      
      * change MakeCBlockClusterAdaptor
      
      * other method use new gridwise gemm
      
      * clean up
      
      * change pad method to make_right_pad_transform
      
      * kbatch out from transform function
      
      * clean up and fix bug
      
      * fix bug
      
      * using function type to reduce template parameters

      * using auto to replace defined function type
      
      * clean up
      Co-authored-by: ltqin <letaoqin@amd.com>
      Co-authored-by: Chao Liu <chao.liu2@amd.com>
      Co-authored-by: Jing Zhang <jizhan@amd.com>
  21. 06 Oct, 2021 3 commits
    • [MIOpen Downstream] Fix Reduction Kernel (#34) · b2dc55f8
      Qianfeng authored
      
      
      * Tiny fix in using data type template parameters in blockwise and direct_threadwise kernel
      
      * Fix with regard to implementing GetZeroVal() in both kernel and host
      
      * Avoid converting to compType from dstDataType before writing the output value
      
      * Add half_t support to NumericLimits and make constexpr GetZeroVal() of binary operator
      
      * Add CONSTANT decorator for descriptor read buffer
      
      * Use get_thread_local_1d_id() for thread local Id
      
      * Rename GetZeroVal() to GetReductionZeroVal() in the kernels
      
      * Remove constexpr from initialized zeroVal and tiny fix in reduction_operator.hpp
      
      * Occasional tiny simplification and update in the kernel files
      
      * Update to re-order tensor dimensions on the host, split second_call kernel wrapper files and simplify reduce_all kernel wrappers
      
      * Update to remove OpenCL tidy checking failures
      
      * Update for better readability
      
      * Remove unused codes and not-needed template parameters in the kernel wrappers
      Co-authored-by: Chao Liu <chao.liu2@amd.com>
    • Tweak GEMM kernel (#38) · b3e8d57d
      Chao Liu authored
      * add parameters
      
      * tweak gemm
      
      * tweak
      
      * update conv
      
      * update script
      
      * adding bwd 1x1
      
      * update script
      
      * adding 1x1 bwd
      
      * debugging bwd 1x1 failure
      
      * update script
      
      * update script
      
      * test
      
      * test v100
      
      * clean up
    • Add VectorType support into StaticBuffer (#27) · 846f462b
      zjing14 authored
      
      
      * init StaticBufferV2
      
      * clean
      
      * adopt old output stage for staticBufferV2
      
      * clean
      
      * remove hack
      
      * clean
      
      * clean
      
      * clean code
      
      * move c_buffer alloc into blockwise gemm
      
      * add adaptors for m/n_thread_data_on_grid
      
      * adjust blockwise_gemm_xdlops
      
      * reorder ops in GEMM hot loop
      Co-authored-by: Chao Liu <chao.liu2@amd.com>
  22. 29 Sep, 2021 1 commit
    • [Enhancements] Several bugfixes and refactoring of dynamic generic reduction (#1156) · dfb80c4e
      Qianfeng authored
      * Squashed 'src/composable_kernel/' content from commit f6edda61
      
      git-subtree-dir: src/composable_kernel
      git-subtree-split: f6edda61
      
      * add solver ConvIgemmFwdV6r1DlopsNchwKcyxNkhw; rename static ck source files
      
      * Squashed 'src/composable_kernel/' changes from f6edda61..5781adf5
      
      5781adf5 Update develop (#5) (#6)
      97e6d514 Merge pull request #4 from ROCmSoftwarePlatform/separate_online_compile
      7b1ec41e refactor
      49c33aae refactor
      54b3e73d rename
      
      git-subtree-dir: src/composable_kernel
      git-subtree-split: 5781adf5
      
      
      
      * fix
      
      * refactor
      
      * remove online compilation from CK
      
      * refactor
      
      * fix
      
      * add ctest
      
      * tidy
      
      * add tidy
      
      * tidy
      
      * tidy
      
      * tidy
      
      * tidy
      
      * tidy
      
      * tidy
      
      * tidy
      
      * tidy
      
      * tidy
      
      * add c-style pointer cast
      
      * vector/scalar pointer cast use c-style pointer cast instead of reinterpret_cast
      
      * fix clang warning suppression
      
      * tidy
      
      * suppress cppcheck
      
      * fix enum issue
      
      * revert changes to hip build
      
      * fix kernel filename
      
      * update CK build script
      
      * rename
      
      * rename
      
      * make inner product compatible on gfx900
      
      * Update src/include/miopen/solver/ck_utility_common.hpp
      Co-authored-by: JD <Jehandad.Khan@amd.com>
      
      * compiler parameter use stream
      
      * use int instead of index_t in kernel wrapper
      
      * DynamicBuffer, StaticBuffer, amd_buffer_load support customized value for invalid element
      
      * refactor
      
      * refactor
      
      * change cmakelist
      
      * change ck common utility
      
      * fix
      
      * Squashed 'src/composable_kernel/' changes from 5781adf5..31b40352
      
      31b40352 Merge pull request #16 from ROCmSoftwarePlatform/develop
      b62bf8c3 Merge pull request #14 from ROCmSoftwarePlatform/miopen_downstream_init_integration
      ccc4a1d3 Merge pull request #8 from ROCmSoftwarePlatform/miopen_downstream_init_integration
      67ad47e7 refactor
      16effa76 refactor
      a91b68df DynamicBuffer, StaticBuffer, amd_buffer_load support customized value for invalid element
      2cbabbba use int instead of index_t in kernel wrapper
      0834bc76 compiler parameter use stream
      f2ac7832 make inner product compatible on gfx900
      4e57b30a rename
      c03045ce rename
      b2589957 update CK build script
      2c48039d fix kernel filename
      d626dccc fix enum issue
      643ebd4f tidy
      ddd49ec9 fix clang warning suppression
      4f566c62 vector/scalar pointer cast use c-style pointer cast instead of reinterpret_cast
      172036d7 add c-style pointer cast
      76f31319 tidy
      d1842890 tidy
      f885c131 tidy
      80120f0a tidy
      c3efeb5e tidy
      56fc0842 tidy
      54fba515 tidy
      e62bae7a tidy
      24c87289 add tidy
      61487e0a fix
      ae98b52a remove online compilation from CK
      cb954213 refactor
      73ca9701 Merge commit '437cc595c6e206dfebb118985b5171bbc1e29eab' into composable_kernel_init_integration_v3
      3b866461 Merge pull request #7 from ROCmSoftwarePlatform/master
      d09ea4f4 Update develop (#5)
      3d32ae94 add solver ConvIgemmFwdV6r1DlopsNchwKcyxNkhw; rename static ck source files
      
      git-subtree-dir: src/composable_kernel
      git-subtree-split: 31b40352
      
      
      
      * Tiny fix in using data type template parameters in blockwise and direct_threadwise kernel
      
      * Fix with regard to implementing GetZeroVal() in both kernel and host
      
      * Avoid converting to compType from dstDataType before writing the output value
      
      * Add half_t support to NumericLimits and make constexpr GetZeroVal() of binary operator
      
      * Add CONSTANT decorator for descriptor read buffer
      
      * Use get_thread_local_1d_id() for thread local Id
      
      * Rename GetZeroVal() to GetReductionZeroVal() in the kernels
      
      * Remove constexpr from initialized zeroVal and tiny fix in reduction_operator.hpp
      
      * Minor simplifications and updates in the kernel files
      
      * Update in src/reducetensor.cpp for consistent IDs passing to the kernel
      
      * Update to re-order tensor dimensions on the host, split second_call kernel wrapper files and simplify reduce_all kernel wrappers
      
      * Update to remove OpenCL tidy checking failures
      
      * Small updates in src/reducetensor.cpp
      
      * Update for better readability
      
      * Remove unused codes and not-needed template parameters in the kernel wrappers
      Co-authored-by: default avatarChao Liu <chao.liu2@amd.com>
      Co-authored-by: default avatarJD <Jehandad.Khan@amd.com>
      dfb80c4e
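The `GetZeroVal()` → `GetReductionZeroVal()` rename in the commit above refers to the identity element each reduction operator starts its accumulator from. As a hedged sketch of the idea only (the operator and function names here are illustrative stand-ins, not CK's actual identifiers), a constexpr identity value per operator looks like:

```cpp
#include <cassert>
#include <limits>

// Hypothetical stand-ins for reduction operators; not CK's real API.
struct AddOp
{
    // Identity element for summation is 0.
    static constexpr float GetReductionZeroVal() { return 0.0f; }
    static float Apply(float a, float b) { return a + b; }
};

struct MaxOp
{
    // Identity element for max is the lowest representable value.
    static constexpr float GetReductionZeroVal()
    {
        return std::numeric_limits<float>::lowest();
    }
    static float Apply(float a, float b) { return a > b ? a : b; }
};

// Seed the accumulator with the operator's identity so empty or
// partially-valid tiles reduce correctly.
template <typename Op>
float Reduce(const float* data, int n)
{
    float acc = Op::GetReductionZeroVal();
    for(int i = 0; i < n; ++i)
        acc = Op::Apply(acc, data[i]);
    return acc;
}
```

Making `GetReductionZeroVal()` constexpr (as the commit does for the binary operators) lets the identity be folded at compile time when initializing per-thread accumulator registers.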
  23. 05 Sep, 2021 1 commit
  24. 31 Aug, 2021 1 commit
    • ltqin's avatar
      Backward weight v4r4r2 with xdlops (#18) · 627d8ef3
      ltqin authored
      
      
      * start
      
      * modify transformation
      
      * modify device convolution
      
      * modify host
      
      * added host conv bwd and wrw
      
      * remove bwd, separate wrw
      
      * clean
      
      * hack all k to zero
      
      * out log
      
      * fixed
      
      * fixed
      
      * change to (out in wei)
      
      * input hack
      
      * hack to out
      
      * format
      
      * fix by comments
      
      * change wei hacks(wei transform has not merge)
      
      * fix program once issue
      
      * fix review comment
      
      * fix vector load issue
      
      * tweak
      Co-authored-by: default avatarltqin <letaoqin@amd.com>
      Co-authored-by: default avatarJing Zhang <jizhan@amd.com>
      Co-authored-by: default avatarChao Liu <chao.liu2@amd.com>
      627d8ef3