wqshmzh / ktransformers · Commits
"vscode:/vscode.git/clone" did not exist on "d790bf99166d5a97d9d01c78b0658706c28580b5"
Unverified commit 92399283
Authored Feb 15, 2025 by Atream; committed by GitHub on Feb 15, 2025
Update attention.py
Parent: d90749d3
Showing 1 changed file with 2 additions and 2 deletions.
ktransformers/operators/attention.py  +2 -2  (view file @ 92399283)
...
@@ -262,7 +262,7 @@ class KDeepseekV2Attention(BaseInjectedModule, DeepseekV2Attention):
         """
         # flash attn doesn't support head_dim bigger than 256
-        # use vLLM triton attention kernel for MQA
+        # use triton attention kernel adapted from vLLM and SGLang for MQA
         decode_attention_fwd_grouped(query_states, compressed_kv_with_k_pe, compressed_kv, attn_output,
                                      page_table,
                                      position_ids.squeeze(0).to(torch.int32), attn_logits,
...
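Context for the changed comment, with an illustrative sketch: in DeepSeek-V2's multi-head latent attention decode path, attention runs over the compressed KV cache, so the effective per-head dimension is kv_lora_rank + qk_rope_head_dim (512 + 64 = 576 in the DeepSeek-V2 config), which exceeds flash attention's 256 head_dim limit; hence the grouped triton MQA kernel. The snippet below is only a minimal sketch of that dispatch decision; pick_decode_kernel and FLASH_ATTN_MAX_HEAD_DIM are hypothetical names for illustration, not part of ktransformers.

# Minimal sketch (hypothetical helper), assuming the 256 head_dim limit
# stated in the diff comment and DeepSeek-V2's MLA sizes.
FLASH_ATTN_MAX_HEAD_DIM = 256  # flash attn doesn't support head_dim bigger than 256

def pick_decode_kernel(head_dim: int) -> str:
    """Choose a decode attention kernel based on the per-head dimension."""
    if head_dim <= FLASH_ATTN_MAX_HEAD_DIM:
        return "flash_attn"
    # Too wide for flash attention: fall back to the grouped triton MQA
    # kernel (decode_attention_fwd_grouped, adapted from vLLM and SGLang).
    return "decode_attention_fwd_grouped"

kv_lora_rank, qk_rope_head_dim = 512, 64  # DeepSeek-V2 MLA config values
print(pick_decode_kernel(kv_lora_rank + qk_rope_head_dim))  # decode_attention_fwd_grouped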