gaoqiong / flash-attention · Commits

Commit 72e27c63, authored Jul 10, 2024 by Tri Dao
Fix typo with softcapping
Parent: 3d41db3e
Showing 1 changed file with 2 additions and 2 deletions.

flash_attn/flash_attn_interface.py (+2, -2)
@@ -721,7 +721,7 @@ def flash_attn_qkvpacked_func(
         softmax_scale,
         causal,
         window_size,
-        softcapping,
+        softcap,
         alibi_slopes,
         deterministic,
         return_attn_probs,
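The renamed argument corresponds to the softcap keyword of flash_attn_qkvpacked_func. A minimal usage sketch, assuming flash-attn >= 2.6 with a CUDA device and fp16 tensors (shapes and the softcap value are illustrative, not taken from the commit):

```python
# Hedged sketch: assumes flash-attn >= 2.6, a CUDA GPU, and fp16 inputs.
import torch
from flash_attn import flash_attn_qkvpacked_func

batch, seqlen, nheads, headdim = 2, 128, 8, 64
# Packed QKV tensor of shape (batch, seqlen, 3, nheads, headdim).
qkv = torch.randn(batch, seqlen, 3, nheads, headdim,
                  dtype=torch.float16, device="cuda")

# softcap applies tanh soft-capping to the attention scores; the keyword
# name is the one this commit fixes (softcapping -> softcap).
out = flash_attn_qkvpacked_func(qkv, dropout_p=0.0, causal=True, softcap=30.0)
print(out.shape)  # (batch, seqlen, nheads, headdim)
```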
@@ -1270,4 +1270,4 @@ def flash_attn_with_kvcache(
         rotary_interleaved,
         num_splits,
     )
-    return (out, softmax_lse) if return_softmax_lse else out
+    return (out, softmax_lse) if return_softmax_lse else out
\ No newline at end of file
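The second hunk only touches the final return of flash_attn_with_kvcache, which yields (out, softmax_lse) when return_softmax_lse=True and just out otherwise. A minimal sketch of that call pattern, assuming flash-attn >= 2.6 with a CUDA device (shapes are illustrative):

```python
# Hedged sketch: assumes flash-attn >= 2.6, a CUDA GPU, and fp16 inputs.
import torch
from flash_attn import flash_attn_with_kvcache

batch, nheads, headdim = 2, 8, 64
seqlen_q, seqlen_cache = 1, 256

q = torch.randn(batch, seqlen_q, nheads, headdim, dtype=torch.float16, device="cuda")
k_cache = torch.randn(batch, seqlen_cache, nheads, headdim, dtype=torch.float16, device="cuda")
v_cache = torch.randn(batch, seqlen_cache, nheads, headdim, dtype=torch.float16, device="cuda")

# Per the return statement in the diff: (out, softmax_lse) is returned
# only when return_softmax_lse=True; otherwise just out.
out, softmax_lse = flash_attn_with_kvcache(q, k_cache, v_cache, return_softmax_lse=True)
out_only = flash_attn_with_kvcache(q, k_cache, v_cache)
```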