gaoqiong / composable_kernel

Commit b281dac5
Authored May 16, 2023 by ltqin
Parent: f1a49daf

start add m loop
Showing 4 changed files with 4725 additions and 0 deletions (+4725, -0):

example/32_batched_gemm_scale_softmax_gemm/CMakeLists.txt (+2, -0)
example/32_batched_gemm_scale_softmax_gemm/batched_multihead_attention_backward_V2R2.cpp (+1048, -0)
include/ck/tensor_operation/gpu/device/impl/device_batched_multihead_attention_backward_xdl_cshuffle_v2R2.hpp (+1313, -0)
include/ck/tensor_operation/gpu/grid/gridwise_batched_multihead_attention_backward_xdl_cshuffle_pt2v2.hpp (+2362, -0)
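The commit title ("start add m loop") together with the new gridwise kernel header suggests that the batched multi-head attention backward kernel is being reworked to walk the M dimension (the query sequence length) in tiles inside the kernel, rather than covering all of M in a single pass. The following is only a minimal CPU-side sketch of that M-blocking pattern, written for illustration; every name in it (MPerBlock, gemm_m_tiled, the matrices) is hypothetical and none of it is taken from the collapsed diffs.

#include <algorithm>
#include <cstddef>
#include <vector>

constexpr std::size_t MPerBlock = 64; // hypothetical tile size along the M dimension

// Accumulate C[M x N] += A[M x K] * B[K x N], walking M in tiles of MPerBlock
// so that each pass of the outer "m loop" touches only MPerBlock rows at a time.
void gemm_m_tiled(const std::vector<float>& A,
                  const std::vector<float>& B,
                  std::vector<float>& C,
                  std::size_t M, std::size_t N, std::size_t K)
{
    for (std::size_t m0 = 0; m0 < M; m0 += MPerBlock)          // the outer "m loop"
    {
        const std::size_t m_end = std::min(m0 + MPerBlock, M); // ragged last tile
        for (std::size_t m = m0; m < m_end; ++m)
            for (std::size_t n = 0; n < N; ++n)
            {
                float acc = 0.0f;
                for (std::size_t k = 0; k < K; ++k)
                    acc += A[m * K + k] * B[k * N + n];
                C[m * N + n] += acc;
            }
    }
}

In an actual GPU gridwise kernel the work inside one M tile would be distributed across a thread block; the sketch is only meant to show the outer loop structure the commit message appears to refer to.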
example/32_batched_gemm_scale_softmax_gemm/CMakeLists.txt

@@ -12,6 +12,8 @@ add_example_executable(example_batched_multihead_attention_backward batched_mult
 add_example_executable(example_grouped_multihead_attention_train grouped_multihead_attention_train.cpp)
 add_example_executable(example_batched_multihead_attention_train batched_multihead_attention_train.cpp)
+add_example_executable(example_batched_multihead_attention_backward_V1R2 batched_multihead_attention_backward_V2R2.cpp)
 add_custom_target(example_gemm_scale_softmax_gemm)
 add_dependencies(example_gemm_scale_softmax_gemm example_batched_gemm_scale_softmax_gemm_xdl_fp16)
 add_dependencies(example_gemm_scale_softmax_gemm example_batched_gemm_scale_softmax_gemm_xdl_bf16)
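The added line follows the registration pattern already used throughout this CMakeLists.txt: add_example_executable, a helper defined by the project rather than a built-in CMake command, is invoked once per example, and the new call wires batched_multihead_attention_backward_V2R2.cpp into the example build next to the existing attention training examples.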
example/32_batched_gemm_scale_softmax_gemm/batched_multihead_attention_backward_V2R2.cpp
new file, mode 100644 (diff collapsed)

include/ck/tensor_operation/gpu/device/impl/device_batched_multihead_attention_backward_xdl_cshuffle_v2R2.hpp
new file, mode 100644 (diff collapsed)

include/ck/tensor_operation/gpu/grid/gridwise_batched_multihead_attention_backward_xdl_cshuffle_pt2v2.hpp
new file, mode 100644 (diff collapsed)