gaoqiong / flash-attention · Commits · 450b64fe

Commit 450b64fe authored Jun 27, 2022 by Tri Dao

Add README section on issues

Parent: c0daa62e
Showing 1 changed file with 10 additions and 0 deletions
README.md

@@ -104,6 +104,16 @@ T4 GPUs are commonly used for inference, so we also measure speedup on the forwa
We see speedups between 2.5x-4.5x on the forward pass.
## When you encounter issues
This alpha release of FlashAttention contains code written for a research
project to validate ideas on speeding up attention.
We have tested it on several models (BERT, GPT2, ViT).
However, there might still be bugs in the implementation that we hope to iron
out in the next few months.
If you encounter any of these bugs, please open a GitHub issue!
## Acknowledgments
Our implementation uses Apex's
[FMHA](https://github.com/NVIDIA/apex/tree/master/apex/contrib/csrc/fmha)
code
...