gaoqiong / flash-attention · Commits

Commit 6a2a16e9 (unverified), authored Jun 30, 2024 by cao lei, committed by GitHub on Jun 30, 2024

    fix typo (#974)

Parent: 5bf20196
Showing 1 changed file with 1 addition and 1 deletion.
tests/test_flash_attn.py (+1, -1)
@@ -233,7 +233,7 @@ def attention_ref(
         window_size: (int, int), left and right window size
         upcast: whether to cast all inputs to fp32, do all computation in fp32, then cast
             output back to fp16/bf16.
-        reorder_ops: whether to change the order of operations (scaling k instead of scaling k, etc.)
+        reorder_ops: whether to change the order of operations (scaling k instead of scaling q, etc.)
             without changing the math. This is to estimate the numerical error from operation
             reordering.
     Output:
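
The reorder_ops flag documented here swaps one mathematically equivalent ordering of operations for another (applying the softmax scale to k instead of to q) so the test can gauge how much numerical error such reordering alone introduces. Below is a minimal sketch of that idea; the function name scores_ref, the tensor shapes, and the standalone structure are illustrative assumptions, not the repo's actual attention_ref.

import math

import torch


def scores_ref(q, k, reorder_ops=False):
    """Compute attention scores of shape (batch, heads, seqlen_q, seqlen_k).

    q: (batch, seqlen_q, heads, head_dim)
    k: (batch, seqlen_k, heads, head_dim)
    """
    softmax_scale = 1.0 / math.sqrt(q.shape[-1])
    if not reorder_ops:
        # Default ordering: scale q, then take dot products with k.
        return torch.einsum("bthd,bshd->bhts", q * softmax_scale, k)
    # Reordered: scale k instead of scaling q. Same math, but in fp16/bf16
    # the rounding happens at different points, so results can differ slightly.
    return torch.einsum("bthd,bshd->bhts", q, k * softmax_scale)


if __name__ == "__main__":
    torch.manual_seed(0)
    q = torch.randn(1, 8, 2, 16, dtype=torch.bfloat16)
    k = torch.randn(1, 8, 2, 16, dtype=torch.bfloat16)
    # The two orderings agree up to low-precision rounding error; the gap
    # serves as a baseline for how much error reordering alone can cause.
    diff = (scores_ref(q, k) - scores_ref(q, k, reorder_ops=True)).abs().max()
    print(diff)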