gaoqiong / flash-attention · Commits
"examples/vscode:/vscode.git/clone" did not exist on "d95c3513119c3b383411e900e7769bb92e0ca3ac"
Commit 898dd4bb, authored Jul 13, 2024 by Tri Dao

Pass seqused_k to _flash_attn_varlen_forward
Parent: 7ef24848

Showing 1 changed file with 6 additions and 5 deletions
flash_attn/flash_attn_interface.py (+6 -5)
@@ -77,12 +77,13 @@ def _flash_attn_varlen_forward(
     dropout_p,
     softmax_scale,
     causal,
-    window_size,
-    softcap,
-    alibi_slopes,
-    return_softmax,
+    window_size=(-1, -1),
+    softcap=0.0,
+    alibi_slopes=None,
+    return_softmax=False,
     block_table=None,
     leftpad_k=None,
+    seqused_k=None,
 ):
     maybe_contiguous = lambda x: x.contiguous() if x.stride(-1) != 1 else x
     q, k, v = [maybe_contiguous(x) for x in (q, k, v)]
@@ -93,7 +94,7 @@ def _flash_attn_varlen_forward(
         None,
         cu_seqlens_q,
         cu_seqlens_k,
-        None,
+        seqused_k,
         leftpad_k,
         block_table,
         alibi_slopes,
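The net effect of the change: seqused_k, a per-sequence count of the key/value tokens actually in use, now flows from the Python wrapper through to the underlying CUDA forward call instead of being hard-coded to None. Below is a minimal usage sketch of the updated helper; the tensor shapes and values are illustrative assumptions, not part of this commit, and the direct call into the underscore-prefixed internal helper is shown only to highlight the new argument.

import torch

from flash_attn.flash_attn_interface import _flash_attn_varlen_forward

# Packed (varlen) layout: three sequences of lengths 128, 128, 256
# concatenated along the token axis. Shapes are illustrative assumptions.
nheads, headdim = 8, 64
cu_seqlens = torch.tensor([0, 128, 256, 512], dtype=torch.int32, device="cuda")
total = int(cu_seqlens[-1])
q = torch.randn(total, nheads, headdim, device="cuda", dtype=torch.float16)
k = torch.randn(total, nheads, headdim, device="cuda", dtype=torch.float16)
v = torch.randn(total, nheads, headdim, device="cuda", dtype=torch.float16)

# seqused_k: (batch_size,) int32. For each sequence, only the first
# seqused_k[i] keys/values are treated as valid -- useful when a KV buffer
# is over-allocated and only partially filled.
seqused_k = torch.tensor([100, 128, 200], dtype=torch.int32, device="cuda")

result = _flash_attn_varlen_forward(
    q, k, v,
    cu_seqlens, cu_seqlens,   # cu_seqlens_q, cu_seqlens_k
    256, 256,                 # max_seqlen_q, max_seqlen_k
    dropout_p=0.0,
    softmax_scale=headdim ** -0.5,
    causal=True,
    seqused_k=seqused_k,      # new: forwarded to the kernel by this commit
)
out = result[0]  # attention output; the rest of the tuple is version-dependent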