1. 26 Jul, 2023 2 commits
  2. 25 Jul, 2023 2 commits
  3. 21 Jul, 2023 3 commits
  4. 18 Jul, 2023 3 commits
  5. 17 Jul, 2023 1 commit
  6. 15 Jul, 2023 1 commit
  7. 12 Jul, 2023 1 commit
  8. 07 Jul, 2023 1 commit
  9. 06 Jul, 2023 5 commits
    • Add basic setup for precommit (#749) (#764) · 237f9cd3
      Adam Osewski authored

      * Add basic setup for precommit
      
      * Update README.md with instructions on installing precommit hooks
      
      ---------
      Co-authored-by: Illia Silin <98187287+illsilin@users.noreply.github.com>
      Co-authored-by: Bartlomiej Wroblewski <bwroblewski10@gmail.com>
    • Split GEMM instance library & enable pipeline v2 optimization (#783) · 850144a0
      Po Yen Chen authored
      * Move source file into sub-directories
      
      * Add missing include directive
      
      * Split DeviceGemmXdl<> fp16 instances
      
      * Fix format
      
      * Remove unnecessary CMakeLists.txt
      
      * Add macros to toggle new features
      
      * Remove debug message
      
      * Turn off GEMM v2 pipeline optimization by default
      
      * Fix format
      
      * Extract duplicated string as list
      
      * Enlarge indent in CMakeLists.txt
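The "Add macros to toggle new features" and "Turn off GEMM v2 pipeline optimization by default" changes above describe a common compile-time feature-toggle pattern. A minimal sketch, assuming a hypothetical macro name (CK's actual macro names may differ):

```cpp
#include <cassert>

// Hypothetical feature toggle: the macro is off by default, so builds get the
// well-tested v1 pipeline unless v2 is explicitly opted into at compile time.
#ifndef CK_EXPERIMENTAL_PIPELINE_V2
#define CK_EXPERIMENTAL_PIPELINE_V2 0 // v2 optimization off by default
#endif

constexpr int pipeline_version()
{
#if CK_EXPERIMENTAL_PIPELINE_V2
    return 2; // opt-in optimized pipeline
#else
    return 1; // default pipeline
#endif
}
```

Builds would enable the optimization by passing `-DCK_EXPERIMENTAL_PIPELINE_V2=1` on the compiler command line, leaving the default behavior unchanged.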
    • Batchnorm splitk single kernel (#771) · 8f5cafaf
      Qianfeng authored
      * Use dim 0 as faster dim for writing mean/var/count workspace in batchnorm multiblock method [performance]
      
      * Add CountDataType as template parameter in blockwise_welford
      
      * Add utility/get_shift.hpp
      
      * Add BatchNorm multiblock single-kernel implementation
      
      * Add smem inline assembly based implementation of gms_init/gms_barrier/gms_reset for gfx90a
      
      * Renaming in device_batchnorm_forward_impl.hpp
      
      * Tiny fix in the batchnorm_fwd profiler
      
      * Revert "Add smem inline assembly based implementation of gms_init/gms_barrier/gms_reset for gfx90a"
      
      This reverts commit d16d00919c43f10759e7b4e4d112125221ed9064.
      
      * Use the old two-kernel batchnorm multiblock method for gfx1030
      
      * Use the old two-kernel batchnorm multiblock method for gfx908
      
      * use the single-kernel batchnorm multiblock method only for gfx90a
      
      * Remove get_wave_id() from utility/get_id.hpp since it is not used
      
      * Set true for testing running mean/variance and saving mean/invvariance in the examples
      
      * Fix copyright wording
      
      * Remove unneeded include in utility/get_id.hpp
      
      * Add comments to workgroup_synchronization.hpp
      
      * Remove unused code in gridwise_multiblock_batchnorm_forward.hpp
      
      * Renaming in the kernels
      
      * Remove unused kernel file
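The multiblock batchnorm changes above (blockwise_welford with a `CountDataType` template parameter, per-block mean/var/count workspaces) rest on Welford's algorithm and its parallel merge step: each block accumulates a partial mean/M2/count, and partials are combined without a second pass over the data. A host-side sketch, with illustrative names rather than CK's actual API:

```cpp
#include <cassert>
#include <cmath>

// Welford accumulator; CountDataType mirrors the template parameter the
// commit adds to blockwise_welford (hypothetical host-side sketch).
template <typename T, typename CountDataType>
struct Welford
{
    T mean = 0;
    T m2   = 0; // sum of squared deviations from the running mean
    CountDataType count = 0;

    // Incorporate one observation.
    void update(T x)
    {
        ++count;
        T delta = x - mean;
        mean += delta / static_cast<T>(count);
        m2 += delta * (x - mean);
    }

    // Merge a partial result produced by another block (Chan's parallel form).
    void merge(const Welford& o)
    {
        if(o.count == 0)
            return;
        CountDataType n = count + o.count;
        T delta = o.mean - mean;
        mean += delta * static_cast<T>(o.count) / static_cast<T>(n);
        m2 += o.m2 + delta * delta * static_cast<T>(count) *
                         static_cast<T>(o.count) / static_cast<T>(n);
        count = n;
    }

    T variance() const { return count > 0 ? m2 / static_cast<T>(count) : T(0); }
};
```

With the merge available, the "single-kernel" variant can combine all per-block partials inside one launch (after a grid-wide sync) instead of running a second reduction kernel.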
    • Adam Osewski authored · f4dfc060
    • Bartlomiej Kocot authored · 2b0b6d9f
  10. 05 Jul, 2023 2 commits
  11. 30 Jun, 2023 1 commit
  12. 28 Jun, 2023 1 commit
  13. 21 Jun, 2023 2 commits
  14. 20 Jun, 2023 2 commits
  15. 19 Jun, 2023 3 commits
    • do not build gemm-gemm and conv-conv examples for gfx94* (#761) · 645eb2f2
      Illia Silin authored
      * do not build gemm-gemm and conv-conv examples for gfx94*
      
      * do not build gemm-gemm and conv-conv examples on navi
    • FP8 enablement - add a pseudorandom number generator, add conversion methods (#708) · f0c620c4
      Rostyslav Geyyer authored
      * Add basic fp8 definitions and pseudorandom number generator
      
      * Format
      
      * Add fp8<->fp32 type_convert
      
      * Format
      
      * Split type_convert and cast_to/from_f8
      
      * Format
      
      * Minor fix
      
      * Minor fix
      
      * Move fp8 utils to a separate header
      
      * Add elementwise ops
      
      * Add fp8_convert_sr
      
      * Format
      
      * Add element op
      
      * Eliminate magic numbers
      
      * Split f8_convert_sr in host and device
      
      * Format
      
      * Add some constexpr
      
      * Add a datatype test
      
      * Format
      
      * Another format
      
      * Add fp8<->fp16 tests
      
      * Update type_converts
      
      * Format
      
      * Add fp16 casting functions
      
      * Format
      
      * Use seed as a runtime arg
      
      * Use element location for PRNG
      
      * Format
      
      * Add fp8<->fp16 to PassThrough element op
      
      * Clean up
      
      * Merge host and device implementations
      
      * Add comments on rounding modes
      
      * Remove leftover code
      
      * Put type_converts into a separate header
      
      * Put random number gen to a separate header
      
      * Rearrange f8_utils' namespaces
      
      * Refactor type_convert.hpp
      
      * Move f8_t definition
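The `fp8_convert_sr` work above ("use seed as a runtime arg", "use element location for PRNG") centers on stochastic rounding: a value between two representable fp8 neighbors is rounded up with probability proportional to its fractional position, so rounding error averages out to zero. A generic sketch of the technique — not CK's actual `f8_convert_sr`, and the helper names are hypothetical:

```cpp
#include <cassert>
#include <cmath>
#include <cstdint>

// Hypothetical PRNG seeded from a runtime seed plus the element index,
// echoing the commit's "use element location for PRNG" change.
inline uint32_t prand(uint32_t seed, uint32_t element_idx)
{
    uint32_t x = seed ^ (element_idx * 0x9E3779B9u); // index mixing
    x ^= x << 13; x ^= x >> 17; x ^= x << 5;         // xorshift scramble
    return x;
}

// Stochastically round x to a multiple of `step` (standing in for the gap
// between adjacent fp8 values): round up with probability = remainder/step.
inline float convert_sr(float x, float step, uint32_t seed, uint32_t idx)
{
    float lo   = std::floor(x / step) * step;
    float frac = (x - lo) / step;                        // in [0, 1)
    float u    = prand(seed, idx) * (1.0f / 4294967296.0f); // uniform [0, 1)
    return (u < frac) ? lo + step : lo;
}
```

Deriving the random bits from seed plus element location (rather than a stateful generator) keeps the conversion deterministic and order-independent, which matters when threads convert elements in parallel.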
    • Maxpool bwd (#750) · 341ad956
      rocking authored
      * Add maxpool f32 kernel and example
      
      * Revise copyright
      
      * Add device pool bwd device op
      
      * Support f16 and bf16
      
      * Add compute datatype for reference code to prevent errors in bf16
      
      * Fix type error
      
      * Remove layout
      
      * Fix bf16 error
      
      * Add f16 and bf16 example
      
      * Add more operations
      
      * Implement IsSupportedArgument
      
      * Add changelog
      
      * Add comment
      
      * Add comment
      
      * Remove useless header
      
      * Move initialize of workspace to the run
      
      * Move set din zero to the device operator
      
      * Save din_length_raw
      
      * Remove useless header
      
      * Calculate gridsize according to the number of CU
      
      * Calculate gridSize according to the number of CU.
      Remove useless header
      
      * Add put example
      
      * Remove useless header
      
      * Fix CI fail
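The maxpool backward commits above ("move set din zero to the device operator", "save din_length_raw") follow the standard scatter formulation: the forward pass records the argmax index of each window, and the backward pass zero-initializes `din` and then scatter-adds each output gradient at its saved index. A host-side sketch under those assumptions, with illustrative names rather than CK's device API:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Maxpool backward as a scatter-add: din must be zeroed first (the commit
// moves this zero-initialization into the device operator), then each output
// gradient flows back to the input element that won the forward-pass max.
void maxpool_bwd(std::vector<float>& din,              // gradient w.r.t. input
                 const std::vector<float>& dout,       // gradient w.r.t. output
                 const std::vector<size_t>& indices)   // argmax saved by forward
{
    for(auto& v : din)
        v = 0.0f; // zero the workspace before accumulation

    for(size_t i = 0; i < dout.size(); ++i)
        din[indices[i]] += dout[i]; // add: overlapping windows may share a winner
}
```

On the device this scatter-add would need atomics when pooling windows overlap, since two outputs can map to the same input element.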
  16. 17 Jun, 2023 1 commit
    • Padded Generic Kernel Instance (#730) · 0d911822
      Qianfeng authored

      * Add NumReduceDim template parameter to DeviceSoftmax and Softmax client API to simplify instance collection
      
      * Move the generic kernel instance to be the first of the instance list for elementwise op of normalization
      
      * Add GetGenericInstance() interface for DeviceOperationInstanceFactory class of DeviceSoftmax
      
      * Add testing of GetGenericInstance() in client_example of Softmax
      
      * Revert "Add testing of GetGenericInstance() in client_example of Softmax"
      
      This reverts commit f629cd9a93ce38dfed4886d849f3c38d2e5379c8.
      
      * Revert "Add GetGenericInstance() interface for DeviceOperationInstanceFactory class of DeviceSoftmax"
      
      This reverts commit a9f0d000eb9fd240404112a526ef125429a351df.
      
      * Support generic kernel instance to be the first instance returned by GetInstances() for GroupNorm
      
      * Move generic kernel instance to separate tuple for elementwise op of normalization
      
      * Remove unused files for softmax instance
      
      * Store generic kernel instance to separate tuple for softmax
      
      * Add IsSupported checking for generic instance to client example of softmax
      
      * Replace get_device_normalize_from_mean_meansquare_instances() with the DeviceOperationInstanceFactory class for elementwise-normalization
      
      * clang-format fix
      
      * Remove int8 from softmax instances
      
      ---------
      Co-authored-by: zjing14 <zhangjing14@gmail.com>
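The padded-generic-instance work above enables a client-side fallback pattern: specialized (tiled) instances reject problem sizes they cannot handle via `IsSupportedArgument`, and a padded generic instance accepts anything. A minimal sketch of that selection loop — the interfaces are illustrative, not CK's actual factory API:

```cpp
#include <cassert>
#include <memory>
#include <vector>

// Illustrative stand-in for a device-operation instance.
struct Instance
{
    virtual ~Instance() = default;
    virtual bool IsSupportedArgument(int length) const = 0;
};

// Fast path: only supports lengths divisible by its tile size.
struct TiledInstance : Instance
{
    int tile;
    explicit TiledInstance(int t) : tile(t) {}
    bool IsSupportedArgument(int length) const override { return length % tile == 0; }
};

// Padded generic fallback: supports any problem size.
struct GenericInstance : Instance
{
    bool IsSupportedArgument(int) const override { return true; }
};

// Pick the first instance that supports the problem; with specialized
// instances ordered first and the generic one last, odd sizes still run.
const Instance* pick(const std::vector<std::unique_ptr<Instance>>& pool, int length)
{
    for(const auto& inst : pool)
        if(inst->IsSupportedArgument(length))
            return inst.get();
    return nullptr;
}
```

Keeping the generic instance in a separate tuple, as the commits do, lets clients decide whether to consult it at all, or only after every specialized instance has declined.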
  17. 16 Jun, 2023 1 commit
  18. 15 Jun, 2023 3 commits
  19. 14 Jun, 2023 2 commits
  20. 12 Jun, 2023 3 commits