1. 01 Jul, 2022 1 commit
  2. 23 Jun, 2022 1 commit
    • Testing all fwd convolution specializations. (#259) · a2edd7d8
      Adam Osewski authored
      
      
      * UniformFill with integer values.
      
      * Log tested instance type string.
      
      * Add UT for all convolution specializations.
      
      * debugging conv
      
      * Fix dangling reference bug.
      
      * Small refinements.
      
      * Fix call to error checking function.
      
      * Small refinements to tests.
      
      * Configure error tolerance
      * Change problem size.
      * Remove OddC case from types that do not support it.
      
      * Add helper traits for AccumulatorDataType.
      
      * Print first 5 errs in check_err for integral types (see the sketch after this commit).
      
      * Rename FillUniform to FillUniformDistribution
      
      * Refactor
      
      * Do not use typed tests.
      * Instead use plain fixture class with templatized member functions.
      * Initialize tensors with integer values.
      
      * Refine test instances.
      
      * Properly set accumulator data type.
      * Add another "big" instance.
      
      * Refactor convolution tests.
      
      * Revert "debugging conv"
      
      This reverts commit b109516455631ff8fd6dce99cf7c14bf8e323ebb.
      
      * Add pragma once + format + small refinement.
      
      * Fix some unwanted changes.
      
      * Clang-format
      
      * Fix profile_convnd to use renamed tensor initializer.
      
      * Add instances for ConvNDFwd kernel, 2D case
      
      * Helpers to get ConvNDFwd 2D instances.
      
      * Refactoring.
      
      * Remove "small block" instance as it was generating compiler errors.
      * Remove default template parameters values.
      
      * Refine and fix test.
      
      * Fix problem with default template parameter types.
      * Adjust error thresholds for floating point values test.
      * Use integer values initialization for instances test.
      * Add tests for ConvNDFwd 2D case.
      
      * Remove AccumulatorDataType type trait.
      
      * Update unit-tests.
      
      * Remove operator<< overload.
      
      * Unlock conv1d/3d nd fwd instances.
      
      * Enable skipping the reference calculation via a flag.
      
      * Fix number of channels for first ResNet50 layer.
      
      * Clang-format.
      Co-authored-by: Adam Osewski <aosewski@amd.com>
      Co-authored-by: Chao Liu <chao.liu2@amd.com>
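
For context on the check_err item above: a minimal, self-contained sketch of the idea — integral data is compared exactly (no tolerance), and only the first few mismatches are printed so logs stay readable. The name and signature here are illustrative, not the repository's actual check_err API.

```cpp
#include <cstddef>
#include <cstdint>
#include <iostream>
#include <vector>

// Hypothetical stand-in for "print first 5 errs in check_err for integral types".
template <typename T>
bool check_err_integral(const std::vector<T>& out,
                        const std::vector<T>& ref,
                        std::size_t max_printed_errs = 5)
{
    std::size_t err_count = 0;
    for(std::size_t i = 0; i < out.size(); ++i)
    {
        if(out[i] != ref[i]) // exact compare: integral data needs no tolerance
        {
            if(err_count < max_printed_errs)
                std::cerr << "err[" << i << "]: out=" << static_cast<std::int64_t>(out[i])
                          << " ref=" << static_cast<std::int64_t>(ref[i]) << '\n';
            ++err_count;
        }
    }
    return err_count == 0;
}
```
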
  3. 21 Jun, 2022 1 commit
  4. 16 Jun, 2022 1 commit
    • Use new github credentials (#278) · fb9b6b1e
      Illia Silin authored
      * use pre-built docker instead of building a new one
      
      * try docker.image.pull
      
      * change syntax in docker.image()
      
      * add 30 min timeout
      
      * increase timeout to 3 hours
      
      * move performance tests to first stage for testing
      
      * set image variable to the new container name
      
      * update image name
      
      * check available images
      
      * check available images in both places
      
      * try different image name
      
      * use image ID to refer to image
      
      * run performance on gfx90a
      
      * fix the gpu_arch labeling, add parameter
      
      * move env vars out of stages
      
      * add stand-alone performance script, MI200 tests, CU numbers
      
      * dos2unix for run_perf_tests.sh
      
      * try the new git credentials
      
      * use env var for git credentials
  5. 10 Jun, 2022 1 commit
    • Add performance tests on MI200 in CI, reporting number of CUs, add stand-alone perf test. (#277) · 1ced00a5
      Illia Silin authored
      * use pre-built docker instead of building a new one
      
      * try docker.image.pull
      
      * change syntax in docker.image()
      
      * add 30 min timeout
      
      * increase timeout to 3 hours
      
      * move performance tests to first stage for testing
      
      * set image variable to the new container name
      
      * update image name
      
      * check available images
      
      * check available images in both places
      
      * try different image name
      
      * use image ID to refer to image
      
      * run performance on gfx90a
      
      * fix the gpu_arch labeling, add parameter
      
      * move env vars out of stages
      
      * add stand-alone performance script, MI200 tests, CU numbers
  6. 02 Jun, 2022 1 commit
    • Adding Resnet50 test to Performance tests (#268) · 1677cf70
      Illia Silin authored
      * add resnet50 test to performance tests
      
      * add blanks before gpu_arch in log files
      
      * add resnet50 test with N=4 and process its results
      
      * add ROCM and HIP versions to test tables
      
      * uncomment the sql queries
      
      * fix script syntax in jenkinsfile
  7. 24 May, 2022 2 commits
    • Overhaul to Reduction and its dependents (#237) · 63eee2d9
      Qianfeng authored
      * Tiny fix in dynamic_buffer.hpp to support vectorized AtomicAdd for double type
      
      * Update to host layer and host reduction
      
      * Merge and remove reduction kernels
      
      * Merge and remove reduction device interfaces and update pooling device interface
      
      * Merge and remove useless reduction device instances
      
      * Update to reduction profiler and reduction ctests
      
      * Update to reduction and pooling examples and add one reduction example
      
      * Change reduction examples to make them testable by ctest
      
      * Add explicit pass checking for reduction and pooling examples
      
      * Explicit assignment of tensor shapes in example reduce_blockwise_two_call
      
      * Use atomic_add to replace atomicAdd and add atomic_add for double type (the classic CAS fallback is sketched after this commit)
      
      * Add reduce ctest support for double data type
      
      * Replace to_int_vector() by using c++ std::vector::assign()
      
      * Keep DeviceReduceThreadWise separated from DeviceReduceBlockWise
      
      * Merge DeviceReduceBlockWise and DeviceReduceMultiBlockAtomicAdd into DeviceReduceMultiBlock
      
      * Add GetAtomicOperationZeroValue() support for AtomicMax
      
      * Tiny change to reduce example README.md
      
      * Fix some tiny issues due to branch merging
      
      * Revoke previous change in dynamic_buffer.hpp and add atomic_add for double2_t
      
      * Add reduce multiblock_atomic_add instances for fp64 to verify vectorized atomic_add on fp64
      
      * Renaming
      
      * Clean up the header includes in the device_reduce instance header files
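
Background for the atomic_add items above: the classic CAS-loop fallback for an fp64 atomic add, written as a HIP device function. This is shown only to illustrate the technique behind "atomic_add for double" — gfx90a has native fp64 atomics and CK's actual atomic_add wrapper is its own abstraction, so treat this as a sketch, not the library's implementation.

```cpp
#include <hip/hip_runtime.h>

// Classic compare-and-swap fallback for atomic double addition.
__device__ double atomic_add_f64(double* address, double val)
{
    auto* address_as_ull   = reinterpret_cast<unsigned long long*>(address);
    unsigned long long old = *address_as_ull, assumed;
    do
    {
        assumed = old;
        // Add to the last-seen value and swap back only if nobody changed it meanwhile.
        old = atomicCAS(address_as_ull,
                        assumed,
                        __double_as_longlong(val + __longlong_as_double(assumed)));
    } while(assumed != old); // retry on contention
    return __longlong_as_double(old);
}
```
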
    • Add performance tests as a stage of CI. (#247) · 1085794d
      Illia Silin authored
      * modify ckProfiler_gemm output
      
      * fix syntax
      
      * change ckProfiler output and return 0
      
      * fix syntax
      
      * output datatype
      
      * fix syntax
      
      * output datatype in another way
      
      * fix syntax
      
      * fix syntax
      
      * test return values of ckProfiler
      
      * add layout info and tests, make sure ckprofiler returns 0
      
      * fix syntax
      
      * change layout output
      
      * fix syntax
      
      * fix syntax again
      
      * update script to process perf results
      
      * rearrange jenkins stages
      
      * fix typo
      
      * add python packages to Docker file
      
      * adding setuptools-rust package
      
      * modify parsing for new test parameters
      
      * test db credentials on jenkins
      
      * fix syntax
      
      * update python script to handle incomplete lines
      
      * upgrade Python to 3.8 and write the gemm_params table
      
      * add sqlalchemy package to docker
      
      * move perf data processing to master node
      
      * move the master node inside a steps region
      
      * add new stage for result processing
      
      * move results processing to separate stage
      
      * reduce number of tests to speedup debugging
      
      * pass config to processPerfResults stage
      
      * run script on master in a docker container
      
      * replace show_node_info
      
      * try loading docker on master node again
      
      * use ansible node instead of master
      
      * get rid of pymysql package
      
      * try ssh connection using paramiko
      
      * put back pymysql
      
      * put the perf data processing back on the gpu node
      
      * put back artifact definition
      
      * archive the perf_log before parsing
      
      * clean up jenkinsfile, fix parsing
      
      * fix typo
      
      * enable all perf tests
      
      * put all stages in original order, finalize script
      
      * fix gpu_arch version
      
      * update parsing script
      
      * remove obsolete file causing merge conflict
  8. 08 May, 2022 1 commit
    • Add Benchmark test into CI (#226) · a3c910ac
      Illia Silin authored
      
      
      * add performance test to jenkins pipeline
      
      * fix typo
      
      * fix the syntax in conv_fwd_util.cpp
      
      * fix the error message syntax spacing
      
      * fix the error message syntax spacing again
      
      * run profile_gemm and archive results
      
      * fix typo
      
      * try to figure out the paths
      
      * try to figure out the paths one more time
      
      * skip the copying step
      
      * build ckProfiler release only once
      
      * change directory using dir
      
      * fix dir syntax
      
      * change the gemm parameters
      
      * do not pipe script output to file
      
      * try running ckProfiler directly
      
      * fix typo
      
      * use set +e
      
      * run profile_gemm.sh || true
      
      * run multiple gemms and parse results
      
      * fix typo in jenkinsfile
      
      * fix syntax
      
      * add new gemm sizes, update scripts
      
      * put all jenkins steps in original order
      Co-authored-by: Chao Liu <chao.liu2@amd.com>
      Co-authored-by: Chao Liu <lc.roy86@gmail.com>
  9. 22 Apr, 2022 1 commit
  10. 15 Apr, 2022 1 commit
    • Compile CK for all targets (#188) · 4221505d
      Illia Silin authored
      
      
      * compile ck for all targets
      
      * update the target criteria
      
      * change the target condition
      
      * fixed some typos
      
      * fixed a missed file
      
      * revert changes in README
      
      * revert device_conv3d_fwd_xdl_...
      
      * update device_conv3d_fwd_xdl_...
      
      * update device_batched_gemm_reduce...
      
      * test the unused arguments fix
      
      * test the warning suppression
      
      * try suppress warnings in device_batched_gemm_reduce_xdl...
      
      * fix the last warnings
      
      * replace UNUSED with std::ignore
      
      * fix a typo
      
      * replaced std::ignore with ignore
      
      * add ignore header to common_header
      
      * refactor atomicAdd
      Co-authored-by: Chao Liu <chao.liu2@amd.com>
  11. 31 Mar, 2022 1 commit
    • Compile for gfx908 and gfx90a (#130) · cd167e49
      Chao Liu authored
      * adding compilation for multiple targets
      
      * fix build
      
      * clean
      
      * update Jenkinsfile
      
      * update readme
      
      * update Jenkins
      
      * use ck::half_t instead of ushort for bf16
      
      * rename enum classes
      
      * clean
      
      * rename
      
      * clean
  12. 23 Mar, 2022 1 commit
    • Unified conv3D API + support for all data types. (#133) · f91579aa
      Adam Osewski authored
      
      
      * Convolution ND
      
      * Code unification across dimensions for generating tensor descriptors.
      * Example
      * Instances
      
      * Move convnd f32 instance file to comply with repo structure.
      
      * Conv 1D tensor layouts.
      
      * Formatting and use ReferenceConv
      
      * Reference ConvFwd supporting 1D and 2D convolution.
      
      * Debug printing TensorLayout name.
      
      * Conv fwd 1D instance f32
      
      * Refactor conv ND example.
      
      Needed to support various conv dimensions.
      
      * Rename conv nd example directory to prevent conflicts.
      
      * Refactor some common utility to single file.
      
      Plus some tests.
      
      * Refactor GetHostTensorDescriptor + UT.
      
      * Add 1D test case.
      
      * Test reference convolution 1d/2d
      
      * Remove some leftovers.
      
      * Fix convolution example error for 1D
      
      * Refactor test check errors utility function.
      
      * Test Conv2D Fwd XDL
      
      * More UT for 1D case.
      
      * Parameterize input & weight initializers.
      
      * Rename example to prevent conflicts.
      
      * Split convnd instance into separate files for 1d/2d
      
      * Address review comments.
      
      * Fix data type for flops/gbytes calculations.
      
      * Assign example number 11.
      
      * 3D cases for convolution utility functions.
      
      * 3D reference convolution.
      
      * Add support for 3D convolution.
      
      * Check for inputs bigger than 2 GB.
      
      * Formatting
      
      * Support for bf16/f16/f32/i8 - conv instances + UT.
      
      * Use check_err from test_util.hpp.
      
      * Split convnd test into separate files for each dim.
      
      * Fix data generation and use proper instances.
      
      * Formatting
      
      * Skip tensor initialization if not necessary.
      
      * Fix CMakefiles.
      
      * Remove redundant conv2d_fwd test.
      
      * Lower problem size for conv3D UT.
      
      * 3D case for convnd example.
      
      * Remove leftovers after merge.
      
      * Add Conv Specialization string to GetTypeString
      
      * Skip instance causing numerical errors.
      
      * Small fixes.
      
      * Remove redundant includes.
      
      * Fix namespace name error.
      
      * Script for automatic testing and logging convolution fwd UTs
      
      * Comment out numactl cmd.
      
      * Refine weights initialization and relax rtol for fp16
      
      * Fix weights initialization for int8.
      
      * Add type_convert when storing output in ref conv 1D.
      
      * Get back old conv2d_fwd_xdl operation.
      
      * Silence conv debug print.
      
      * format
      
      * clean
      
      * clean
      
      * Fix merge.
      
      * Fix namespace for check_err
      Co-authored-by: Adam Osewski <aosewski@amd.com>
      Co-authored-by: Chao Liu <chao.liu2@amd.com>
  13. 22 Mar, 2022 1 commit
    • Reduction for int8 and bfloat16 (#125) · 9a8ee8a3
      Qianfeng authored
      
      
      * Use thread cluster descriptor and explicit M_K 2d descriptor to simplify Blockwise Reduction
      
      * Change by replacing ReduceDims by NumReduceDims as Device Reduce interface template parameter
      
      * Rename the folder name for the pool2d and reduce examples
      
      * Update to reduction test scripts
      
      * Add Readme for pool2d_fwd and reduce_blockwise examples
      
      * Add support for int8_t reduction (ADD/AVG, MIN/MAX/AMAX)
      
      * Tiny fix in reduce profiler and tiny update in reduce testing scripts
      
      * Tiny fix in testing script profile_reduce_no_index.sh
      
      * Tiny fix in testing script profile_reduce_no_index.sh
      
      * Add support for bfp16 reduction (using bhalf_t = ushort)
      
      * Tiny fix in amd_buffer_addressing.hpp
      
      * Tiny change in script/profile_reduce_with_index.sh
      
      * Use AccDataType for Beta value and use element_wise::PassThrough (the AccDataType idea is sketched after this commit)
      
      * Use type_convert for type converting in host layer reduction
      
      * Renaming and refining in Reduction profiler/device layer/examples
      
      * Renaming and refining in Reduction profiler/device layer/examples
      
      * Renaming all NumReduceDims to NumReduceDim
      
      * Fix the leaked type_convert in ThreadwiseTensorSliceTransfer_v2
      
      * Update to testing scripts to add bf16 support
      
      * added more static_assert
      
      * Remove buggy tunable configurations defined in device_reduce_instance_xxx.hpp
      
      * Add static_assert to give compile-time warning for incorrect thread slice-size/vector-size configurations
      
      * minor change
      
      * Refine and fix (in GetWorkspaceSizeInBytes of MultiBlockPartialReduce) to make int8 completely pass
      
      * Tiny renaming in gridwise_2d_reduction_multiblock_partial_reduce.hpp
      
      * Tiny fix in script/profile_reduce_no_index.sh
      
      * Refine in DeviceReduce layer with regard to using NumInvariantDim/NumReduceDim or InvariantDims/ReduceDims
      
      * Generic renaming in host reduction and DeviceReduce layer
      
      * Add support for 4-d all dimension reduction in the profiler and add_device_reduce_xxx instances
      
      * Use multi-thread and simplification for host Reduction implementation
      
      * Add ctest for reduction
      
      * Update to clarify the use of the data init method in profile_reduce/example_reduce/test_reduce/
      
      * Update the reduce CTest executables to enable default testing behavior when no command argument is given
      
      * Renaming
      Co-authored-by: Jianfeng yan <jfyan008@gmail.com>
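
Several items above (int8 reduction, AccDataType for the Beta value, type_convert in the host layer) revolve around one idea: accumulate in a wider type than the stored element type, converting once on the way out. A tiny host-side sketch with hypothetical names — CK's device implementation is far more involved:

```cpp
#include <cstdint>
#include <vector>

// Reduce InDataType inputs in an AccDataType accumulator, apply alpha in the
// accumulator type, and convert to OutDataType only at the end.
template <typename InDataType, typename AccDataType, typename OutDataType>
OutDataType reduce_add(const std::vector<InDataType>& in, AccDataType alpha)
{
    AccDataType acc = 0;
    for(InDataType v : in)
        acc += static_cast<AccDataType>(v); // per-element type_convert
    return static_cast<OutDataType>(alpha * acc);
}

// Usage, e.g. int8 inputs with an int32 accumulator:
// std::int8_t r = reduce_add<std::int8_t, std::int32_t, std::int8_t>(data, 1);
```
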
  14. 11 Mar, 2022 1 commit
  15. 10 Mar, 2022 1 commit
    • Pr82 followup (#115) · 827301d9
      Qianfeng authored
      * Use thread cluster descriptor and explicit M_K 2d descriptor to simplify Blockwise Reduction
      
      * Change by replacing ReduceDims by NumReduceDims as Device Reduce interface template parameter
      
      * Rename the folder name for the pool2d and reduce examples
      
      * Update to reduction test scripts
      
      * Add Readme for pool2d_fwd and reduce_blockwise examples
      
      * Tiny fix in reduce profiler and tiny update in reduce testing scripts
      
      * Tiny fix in testing script profile_reduce_no_index.sh
      
      * Tiny change in script/profile_reduce_with_index.sh
      
      * Renaming and refining in Reduction profiler/device layer/examples
      
      * Renaming and refining in Reduction profiler/device layer/examples
      
      * Renaming all NumReduceDims to NumReduceDim
  16. 05 Mar, 2022 1 commit
    • Reduction in Composable Kernel (#82) · e17c0d80
      Qianfeng authored
      
      
      * Initial adding of generic reduction
      
      * Initial adding of generic reduction ...
      
      * Updates to make compilation succeed
      
      * clang-format all files
      
      * clang-format some files again
      
      * Renaming in profiler/include/profile_reduce.hpp
      
      * Updates to make BlockWise cases pass
      
      * Updates to make ThreadWise and MultiBlockTwoCall cases pass
      
      * Remove the support for MUL and NORM1 reduceOp from the profiler and the device instances
      
      * Change to replace the dim0_max_vector_size/dim1_max_vector_size template argument in the device reduce classes
      
      * format
      
      * adding pooling
      
      * added max and average pooling
      
      * comment out cout and kernel timing
      
      * Tiny simplification in profiler/reduce_profiler.cpp
      
      * Add example for reduce_blockwise
      
      * Tiny updates
      
      * Change to pass the ElementWiseOp from device layer to kernel
      
      * Fix the vectorDim and vectorSize in Device layer
      
      * Enable vector load on both dim0 and dim1 for Threadwise method
      
      * Tiny updates
      
      * Change to let the user pass the preUnaryOp and posUnaryOp
      
      * Make pooling example work
      
      * split device_reduce_instance into two libraries
      
      * Tiny update
      
      * Replace nanPropaOpt enum by boolean propagate_nan
      
      * Simplification in DeviceReduce layer codes
      
      * update build
      
      * Change to clarify the difference between ck::half_t and half_float::half
      
      * Renaming in all the reduction codes
      
      * Add VectorSize as template parameter for device layer
      
      * Add BetaIsZero as kernel template and as AccDataType for alpha
      
      * print
      
      * Small updates for pooling
      
      * Updates for host_generic_reduction for reference
      
      * Update to make AVG pooling pass
      
      * Update to make MAX pooling with indices output pass (see the pooling-with-index sketch after this commit)
      
      * fix
      
      * add OutDst vector store to threadwise reduction and pooling
      
      * tweak
      
      * turn off check_indices that caused build issue
      
      * refactor pooling
      
      * clean up
      
      * turn off check_indices due to a build issue with the hip compiler
      
      * add more tile size for odd C
      
      * tweak conv for odd C
      
      * update script
      
      * clean up elementwise op
      
      * add hack in reduction_operator.hpp to avoid compile error. To fix it, need to use element_wise_op in reduction op
      
      * Add OutVectorSize as device and kernel tunable, also update to Elementwise Operations
      
      * Move reduce operator mapping to host layer file reduction_operator_mapping.hpp from reduction_operator.hpp
      
      * Change to the unary operators
      
      * Move the definitions of unary operations to element_wise_operation.hpp
      
      * re-org files
      
      * Refine in device interfaces and multiblock kernels
      
      * Split the reduction configurations into instances for specific methods
      
      * Update in getTypeString() of device pool2d
      
      * Renaming in host and kernel
      
      * Tiny update in profiler/src/profiler.cpp
      
      * Uncomment in device_operation/CMakeLists.txt to enable the building of all operations
      
      * Make check_indices a templated function to remove some linking issue
      
      * Renaming in the profiler reduce module
      
      * Add support for double Reduction (but disable MultiblockAtomicAdd for double)
      
      * Tiny correction of literal string
      
      * Rename DevicePoolFwd to DevicePool2dFwd
      
      * Split device_reduce_instance_xxx.cpp files according to the data types to speed up compiling
      
      * Add comments for lists of configurations, lists of instances and references of add_reduce_instances_xxx
      
      * Remove un-used header file gridwise_generic_reduction_wrapper_common.hpp
      
      * Renaming and refining in the Reduction codes
      
      * Tiny change in the unary operators
      
      * Renaming symbols and files
      
      * Renaming symbols in the kernels
      
      * Move kernel kernel_set_buffer_value to separate file
      
      * Add IndexDataType template parameter for kernels and use int32_t as index data type in device layer
      
      * Tiny update in the kernels
      
      * Remove definition of sqrtf()/isnan()/abs() for half_t due to some ADL issue
      
      * Simplify a helper function in device layer
      
      * Tiny adjustment in testing data initialization
      
      * Renaming in kernel/device/host
      
      * Add two testing scripts for reduction
      
      * Refine the Unary operators in element_wise_operation.hpp
      
      * Update in the reduce profiler module
      
      * Update to the reduction testing scripts
      
      * reduce compile parallelism
      
      * change CI docker to rocm5.0
      
      * remove unused variables
      
      * fix build
      Co-authored-by: Chao Liu <chao.liu2@amd.com>
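
One of the changes above makes "MAX pooling with indices output" pass, with int32_t later fixed as the index data type in the device layer. A minimal host sketch of what pooling-with-indices computes — a hypothetical 1-D, non-overlapping-window helper, not CK's kernel:

```cpp
#include <cstdint>
#include <limits>
#include <vector>

// Alongside each pooled maximum, record the flattened input index of the
// winning element in a second output tensor.
void max_pool_1d_with_index(const std::vector<float>& in,
                            int window,
                            std::vector<float>& out,
                            std::vector<std::int32_t>& out_idx)
{
    const int n_out = static_cast<int>(in.size()) / window;
    out.resize(n_out);
    out_idx.resize(n_out);
    for(int o = 0; o < n_out; ++o)
    {
        float best            = -std::numeric_limits<float>::infinity();
        std::int32_t best_idx = -1;
        for(int w = 0; w < window; ++w)
        {
            const std::int32_t i = o * window + w;
            if(in[i] > best)
            {
                best     = in[i];
                best_idx = i;
            }
        }
        out[o]     = best;
        out_idx[o] = best_idx;
    }
}
```
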
  17. 07 Feb, 2022 1 commit
    • GEMM+Bias+ReLU+Add (#76) · 823657ed
      Chao Liu authored
      * tweak conv for odd C
      
      * update script
      
      * clean up elementwise op
      
      * fix build
      
      * clean up
      
      * added example for gemm+bias+relu+add
      
      * added example for gemm+bias+relu
      
      * add profiler for gemm_s_shuffle; re-org files
      
      * add profiler
      
      * fix build
      
      * clean up
      
      * clean up
      
      * clean up
      
      * fix build
  18. 21 Jan, 2022 1 commit
    • Add gemm_shuffle host api (#71) · 4d40b197
      rocking5566 authored
      * [What]
      1. Add DeviceGemmXdl_C_Shuffle
      2. Revise example of gemm_xdl
      [Why] Prepare to add a shuffle version of D = alpha * (A * B) + beta * C (a host reference for this epilogue is sketched after this commit)
      [How] Imitate DeviceGemmXdl and device_conv2d_fwd_xdl_c_shuffle_nhwc_kyxc_nhwk.hpp
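
The [Why] above names the target epilogue, D = alpha * (A * B) + beta * C. A host-side reference for that formula, assuming row-major A (M×K), B (K×N), and C/D (M×N); the XDL C-shuffle kernel computes the same result on the GPU — this is only the math:

```cpp
#include <vector>

// Naive reference: D = alpha * (A * B) + beta * C, all matrices row-major.
void gemm_ref(int M, int N, int K, float alpha, float beta,
              const std::vector<float>& A, const std::vector<float>& B,
              const std::vector<float>& C, std::vector<float>& D)
{
    for(int m = 0; m < M; ++m)
        for(int n = 0; n < N; ++n)
        {
            float acc = 0.f;
            for(int k = 0; k < K; ++k)
                acc += A[m * K + k] * B[k * N + n];
            D[m * N + n] = alpha * acc + beta * C[m * N + n];
        }
}
```
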
  19. 24 Nov, 2021 1 commit
  20. 15 Nov, 2021 1 commit
    • FP16 data in-register transpose (#41) · b491ebf3
      Chao Liu authored
      * start fixing 16bit data packing
      
      * adding StaticTensor
      
      * adding StaticTensor
      
      * adding StaticTensor
      
      * add missing constexpr
      
      * adding static tensor
      
      * adding static tensor
      
      * adding transpose
      
      * add inline asm for transpose 2x2 of half_t (the data movement is sketched after this commit)
      
      * add general transpose_vectors(), but have unnecessary register initialization using v_mov
      
      * fix unnecessary register initialization in transpose_vector by using more pass-by-reference
      
      * add hardcoded logic for NHWC wrw
      
      * improve asm for v_pack
      
      * make ThreadwiseTensorSliceTransfer_v3r2 support any tensor
      
      * tweak
      
      * reorganize file
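
The 2x2 half_t transpose above operates on two 32-bit registers, each packing two fp16 values; the commit does the regrouping with v_pack inline asm so the data never leaves vector registers. A portable bit-twiddling sketch of just the data movement:

```cpp
#include <cstdint>

// Two 32-bit words each pack two fp16 values, low half first:
// x = {x0, x1}, y = {y0, y1}. The 2x2 transpose regroups them into
// r0 = {x0, y0} and r1 = {x1, y1}.
inline void transpose_2x2_packed_half(std::uint32_t x, std::uint32_t y,
                                      std::uint32_t& r0, std::uint32_t& r1)
{
    r0 = (x & 0x0000ffffu) | (y << 16); // {x0, y0}
    r1 = (x >> 16) | (y & 0xffff0000u); // {x1, y1}
}
```
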
  21. 14 Nov, 2021 1 commit
    • ckProfiler and device-level XDL GEMM operator (#48) · e823d518
      Chao Liu authored
      * add DeviceGemmXdl
      
      * update script
      
      * fix naming issue
      
      * fix comment
      
      * output HostTensorDescriptor
      
      * rename
      
      * padded GEMM for fwd v4r4r4 nhwc
      
      * refactor
      
      * refactor
      
      * refactor
      
      * adding ckProfiler
      
      * adding ckProfiler
      
      * refactor
      
      * fix tuning parameter bug
      
      * add more gemm instances
      
      * add more fp16 GEMM instances
      
      * fix profiler driver
      
      * fix bug in tuning parameter
      
      * add fp32 gemm instances
      
      * small fix
      
      * refactor
      
      * rename
      
      * refactor gemm profiler; adding DeviceConv and conv profiler
      
      * refactor
      
      * fix
      
      * add conv profiler
      
      * refactor
      
      * adding more GEMM and Conv instance
      
      * Create README.md
      
      Add build instruction for ckProfiler
      
      * Create README.md
      
      Add Readme for gemm_xdl example
      
      * Update README.md
      
      Remove build instruction from top most folder
      
      * Update README.md
      
      * clean up
  22. 06 Oct, 2021 1 commit
    • Tweak GEMM kernel (#38) · b3e8d57d
      Chao Liu authored
      * add parameters
      
      * tweak gemm
      
      * tweak
      
      * update conv
      
      * update script
      
      * adding bwd 1x1
      
      * update script
      
      * adding 1x1 bwd
      
      * debugging bwd 1x1 failure
      
      * update script
      
      * update script
      
      * test
      
      * test v100
      
      * clean up
  23. 05 Sep, 2021 1 commit
    • GEMM driver and kernel (#29) · 19613902
      Chao Liu authored
      * add gemm driver
      
      * tweak
      
      * add gemm kernel: mk_kn_mn and km_kn_mn (the layout naming is sketched after this commit)
      
      * tweak
      
      * add GEMM km_nk_mn
      
      * fix comment
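
A possible reading of the kernel names above (an interpretation, not documented here): each letter pair gives the storage order of A, B, and C, so "mk_kn_mn" is row-major A, row-major B, row-major C, while "km_..." and "..._nk_..." are the transposed variants. Hypothetical index helpers make the convention concrete:

```cpp
// Row-major flattening for each claimed layout; M/K/N are the GEMM dimensions.
inline int idx_mk(int m, int k, int K) { return m * K + k; } // A stored MxK
inline int idx_km(int m, int k, int M) { return k * M + m; } // A stored KxM (transposed A)
inline int idx_kn(int k, int n, int N) { return k * N + n; } // B stored KxN
inline int idx_nk(int k, int n, int K) { return n * K + k; } // B stored NxK (transposed B)
// "mk_kn_mn" then computes C[m * N + n] += A[idx_mk(m, k, K)] * B[idx_kn(k, n, N)].
```
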
  24. 19 Aug, 2021 1 commit
    • Composable kernel init integration v3 (#1097) · 6fe3627a
      Chao Liu authored
      * Squashed 'src/composable_kernel/' content from commit f6edda61
      
      git-subtree-dir: src/composable_kernel
      git-subtree-split: f6edda61
      
      * add solver ConvIgemmFwdV6r1DlopsNchwKcyxNkhw; rename static ck source files
      
      * Squashed 'src/composable_kernel/' changes from f6edda61..5781adf5
      
      5781adf5 Update develop (#5) (#6)
      97e6d514 Merge pull request #4 from ROCmSoftwarePlatform/separate_online_compile
      7b1ec41e refactor
      49c33aae refactor
      54b3e73d rename
      
      git-subtree-dir: src/composable_kernel
      git-subtree-split: 5781adf5
      
      
      
      * fix
      
      * refactor
      
      * remove online compilation from CK
      
      * refactor
      
      * fix
      
      * add ctest
      
      * add c-style pointer cast
      
      * vector/scalar pointer cast use c-style pointer cast instead of reinterpret_cast
      
      * fix clang warning suppression
      
      * tidy
      
      * suppress cppcheck
      
      * fix enum issue
      
      * revert changes to hip build
      
      * fix kernel filename
      
      * update CK build script
      
      * rename
      
      * rename
      
      * make inner product compatible on gfx900
      
      * Update src/include/miopen/solver/ck_utility_common.hpp
      Co-authored-by: JD <Jehandad.Khan@amd.com>
      
      * compiler parameter use stream
      
      * use int instead of index_t in kernel wrapper
      
      * DynamicBuffer, StaticBuffer, amd_buffer_load support customized value for invalid element
      
      * refactor
      
      * refactor
      
      * change cmakelist
      
      * change ck common utility
      
      * fix
      Co-authored-by: JD <Jehandad.Khan@amd.com>
  25. 10 Aug, 2021 1 commit
  26. 27 Jul, 2021 1 commit
    • [MIOpen Downstream] Initial MIOpen integration (#52) · f63a23ac
      Chao Liu authored
      * update online kernel wrapper: bundle all descriptors in a tuple
      
      * change __CONSTANT__ to CONSTANT
      
      * rename
      
      * adding tuning
      
      * added IsValidCompileParameter
      
      * reorganize
      
      * adding tunable for fp16 and int8
      
      * fix kernel compile warning and bug fixes
      
      * suppress warning about cast CONSTANT (address space 4) pointer
      
      * fix building issue
  27. 18 Jul, 2021 1 commit
  28. 08 Jul, 2021 1 commit
  29. 01 Jul, 2021 1 commit
    • xdlops_v4r4_fwd fp32/fp16 (#34) · 3835318c
      zjing14 authored
      
      
      * create files for xdlops
      
      * working on blockwise_gemm_xdlops
      
      * add KReduction
      
      * add m/n repeats
      
      * add 2x2 pipeline
      
      * added 128x128 wavegemm
      
      * use StaticBuffer of vector_type
      
      * break vector type to blk_size
      
      * add kpack into xldops_gemm and blockwise_gemm
      
      * A broadcast only
      
      * add fp32 mfma instructions
      
      * adding fp16 mfma
      
      * pack half4_t
      
      * rename kperwave to kpack
      
      * add 32x32x8fp16
      
      * add fp16 mfma
      
      * clean code
      
      * clean code
      
      * V4r4 xdlops kpack (#35)
      
      * add kpack with incorrect results
      
      * bug fix for make_dynamic_naive_tensor_descriptor_aligned_v2
      
      * add 1x1 kernel
      
      * add gridwise_gemm_v2 - single_buffer
      
      * enabled dwordx4 for fp16
      Co-authored-by: Chao Liu <chao.liu2@amd.com>
      
      * refactor fwd-v4r4-xdlops
      
      * add v4r4-nhwc-xdlop
      
      * improve some perf of nhwc and nchw by tuning parameters, and change scheduling in gridwise-gemm loop
      
      * tweak scheduling in gridwise gemm
      
      * add v4r3 with a single output copy
      
      * init commit: output with slice win
      
      * adding sliceWin
      
      * add multiple repeats pattern
      
      * starting adding bwd-v4r1-xdlops
      
      * use tuple as SrcBuffer
      
      * adding bwd-data v4r1 nhwc xdlops
      
      * fix bug in make_dynamic_naive_tensor_descriptor_aligned_v2()
      
      * fix bug in host bwd-data conv
      
      * initial implementation of bwd-data v4r1 nhwc xdlops
      
      * add launch bound flags
      
      * enable launch bound
      
      * add m/nrepeat=4
      
      * tweak bwd-data v4r1 nhwc xdlops
      
      * added bwd-data v4r1 nhwc xlops with output A and weight B
      
      * add fwd-v4r4 nhwc xdlops, A input, B weight, C output
      Co-authored-by: Chao Liu <chao.liu2@amd.com>
  30. 24 Jun, 2021 1 commit
    • Add online compilation for dynamic kernels (#37) · 1685048a
      Qianfeng authored
      
      
      * Add online-compiling facility
      
      * Synchronize from fwd-v4r5 and implement host interfaces to call conv-fwd v4r4/v4r5 using on-line compiling method
      
      * Tiny adjustment to time reporting
      
      * Use object assignment to replace explicit bytes copying in the first kernel of v4r4/v4r5
      
      * Use single thread to assign descriptor object to device memory
      
      * Adjust to the workload assignment of the two kernels of v4r4 (experimental)
      
      * Revert "Adjust to the workload assignment of the two kernels of v4r4 (experimental)"
      
      This reverts commit eb38461456bb0c82b6c0d32cdd616e181907e20c.
      
      * Update to make constexpr for generating descriptor types in kernel 2 of dynamic conv-fwd v4r4
      
      * Update to dynamic conv-fwd v4r4 online-compiling
      
      * Update to dynamic conv-fwd v4r5 online-compiling (result not accurate)
      
      * Tiny update to driver/CMakeLists.txt
      
      * clang-format
      
      * Tiny comments change
      
      * Add env OLC_DUMP_SAVE_TMP_DIR to support saving of temporary dir
      
      * Fwd v4r5 olc perf (#39)
      
      * added hip-clang flags that fix perf issue of online compilation
      
      * fix bug for olc fwd-v4r5-nchw
      
      * Move constexpr and type reference statements out of the function body in conv-fwd v4r4/v4r5 kernel wrapper
      
      * Remove printing in hip_build_utils.cpp
      
      * Update to root CMakeLists.txt
      
      * Revert "Move constexpr and type reference statements out of the function body in conv-fwd v4r4/v4r5 kernel wrapper"
      
      This reverts commit 3d2c5d8ecdd8298b72d127110500ed5b38d9835c.
      Co-authored-by: Chao Liu <chao.liu2@amd.com>
      Co-authored-by: Chao Liu <lc.roy86@gmail.com>
      Co-authored-by: root <root@dc-smc-18.amd.com>
  31. 10 Jun, 2021 1 commit
  32. 12 May, 2021 1 commit
  33. 11 May, 2021 1 commit
  34. 25 Mar, 2021 1 commit
  35. 24 Jun, 2020 1 commit
  36. 20 Jan, 2020 1 commit
    • Added bwd data v3r1 v4r1, tweaking v1 (#10) · c5da0377
      Chao Liu authored
      * Added bwd data v3r1: breaks the computation into a series of load-balanced GEMMs launched in a single kernel
      * Added bwd data v4r1: like v3r1, but launches the GEMMs in multiple kernels
      * Tweaked v1r1 and v1r2 (atomic) on AMD GPUs
  37. 03 Dec, 2019 1 commit
    • backward data (#7) · 8f5f6496
      Chao Liu authored
      * enabled atomic add in tensor copy
      * added gridwise GEMM
      * added backward data conv using GEMM + atomic
      * added backward data conv using GEMM, no atomic
  38. 11 Oct, 2019 1 commit
  39. 17 Sep, 2019 1 commit