gaoqiong / flash-attention · Commit 2ed471ec

Add tests for numerical error

Authored Jul 22, 2022 by Tri Dao
Parent: 42f54d88
Showing 2 changed files with 678 additions and 0 deletions (+678 -0):

README.md                 +11   -0
tests/test_flash_attn.py  +667  -0
README.md @ 2ed471ec

@@ -116,6 +116,17 @@ T4 GPUs are commonly used for inference, so we also measure speedup on the forward pass.
We see speedups between 2.5x-4.5x on the forward pass.
## Tests
We test that FlashAttention produces the same output and gradient as a reference
implementation, up to some numerical tolerance. In particular, we check that the
maximum numerical error of FlashAttention is at most twice the numerical error
of a baseline implementation in PyTorch (for different head dimensions, input
dtype, sequence length, causal / non-causal).
To run the tests:
```
pytest -q -s tests/test_flash_attn.py
```
## When you encounter issues
This alpha release of FlashAttention contains code written for a research
...
tests/test_flash_attn.py (new file, mode 100644) @ 2ed471ec

This diff is collapsed.