gaoqiong / flash-attention
Commits · 1aa6d7d9b60bf8fbb5584f057934bdee15ed33fe · flash-attention / tests
21 Oct, 2022 (1 commit)
Rework dropout to decouple forward and backward · 1aa6d7d9
Tri Dao authored Oct 18, 2022
They don't have to have the same block size, number of threads, etc.
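The change itself lives in the CUDA kernels, but the idea behind this commit can be sketched in plain PyTorch: make the dropout mask a deterministic function of a seed, so the backward pass regenerates it instead of reading a mask stored by the forward pass, and the two kernels are then free to use different block sizes and thread counts. The helper names and the use of torch.Generator below are illustrative assumptions, not the repository's implementation.

```python
import torch

def dropout_mask_from_seed(seed, shape, p, device):
    # The keep-mask is a pure function of (seed, shape, p), so the forward
    # and backward passes can each rebuild it independently instead of one
    # pass having to save it in the layout the other expects.
    gen = torch.Generator(device=device).manual_seed(seed)
    return torch.rand(shape, generator=gen, device=device) >= p

def dropout_forward(x, p, seed):
    keep = dropout_mask_from_seed(seed, x.shape, p, x.device)
    return x * keep / (1.0 - p)

def dropout_backward(grad_out, p, seed):
    # Same seed -> same mask, recomputed here rather than stored by forward.
    keep = dropout_mask_from_seed(seed, grad_out.shape, p, grad_out.device)
    return grad_out * keep / (1.0 - p)
```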
16 Oct, 2022 (1 commit)
Fix #54: set device for multi-GPU case · 52fb4b72
Tri Dao authored Oct 16, 2022
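Issue #54 concerned inputs living on a GPU other than cuda:0. The real fix presumably sits in the C++/CUDA binding; here is a minimal Python-level sketch of the same idea, scoping the launch to the device the inputs live on. The wrapper name and fn are hypothetical, and the inputs are assumed to be CUDA tensors.

```python
import torch

def run_on_inputs_device(fn, q, *args, **kwargs):
    # On a multi-GPU node the current CUDA device defaults to cuda:0, so a
    # custom kernel launched without switching devices can end up targeting
    # the wrong GPU when the inputs live on, say, cuda:1. Scoping the call
    # to q.device avoids that (q is assumed to be a CUDA tensor).
    with torch.cuda.device(q.device):
        return fn(q, *args, **kwargs)
```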
14 Oct, 2022 (1 commit)
Implement attention kernel that splits the batch into two · 5badfb78
Tri Dao authored Oct 13, 2022
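The commit title gives little detail; read literally, it describes processing the batch in two halves rather than in one pass. Below is only a host-side caricature of that split, since the actual change is inside the CUDA kernel, and attn_fn and the wrapper name are placeholders.

```python
import torch

def attention_in_two_batch_chunks(attn_fn, q, k, v):
    # Split the batch dimension in two and run the attention op on each
    # half, then stitch the outputs back together.
    b = q.shape[0]
    mid = b // 2
    out_lo = attn_fn(q[:mid], k[:mid], v[:mid])
    out_hi = attn_fn(q[mid:], k[mid:], v[mid:])
    return torch.cat([out_lo, out_hi], dim=0)
```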
05 Oct, 2022 (1 commit)
Only run backward test for d=128 on A100 · 0c01568d
Tri Dao authored Oct 04, 2022
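A hedged sketch of the guard the title describes, using pytest and the CUDA compute-capability check (A100 reports (8, 0)); the test and helper names are made up for illustration.

```python
import pytest
import torch

def is_sm80():
    # A100 GPUs report CUDA compute capability (8, 0).
    return torch.cuda.is_available() and torch.cuda.get_device_capability() == (8, 0)

@pytest.mark.parametrize("d", [16, 32, 64, 128])
def test_backward_numerics(d):
    # Per the commit title, the d=128 backward pass is only exercised on
    # A100 at this point, so skip that case on other GPUs.
    if d == 128 and not is_sm80():
        pytest.skip("backward with d=128 only runs on A100 (sm80)")
    ...  # forward + backward check for this head dimension goes here
```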
22 Jul, 2022 (1 commit)
Add tests for numerical error · 2ed471ec
Tri Dao authored Jul 22, 2022
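One common way to test numerical error, and roughly what the title suggests: compare the fused op against an fp32 reference and require its error to stay within a small multiple of the error the same reference incurs in fp16. In the sketch below, attn_fn is a placeholder for the op under test, and the (batch, heads, seqlen, head_dim) layout and the factor of 2 are assumptions.

```python
import torch

def attention_reference(q, k, v):
    # Plain PyTorch attention, used as the numerical reference.
    scores = torch.einsum("bhqd,bhkd->bhqk", q, k) / (q.shape[-1] ** 0.5)
    return torch.einsum("bhqk,bhkd->bhqd", torch.softmax(scores, dim=-1), v)

def check_numerical_error(attn_fn, q, k, v, factor=2.0):
    # Bound the fused op's error against an fp32 reference by a small
    # multiple of the error the same reference incurs in fp16.
    ref_fp32 = attention_reference(q.float(), k.float(), v.float())
    ref_fp16 = attention_reference(q, k, v).float()
    out = attn_fn(q, k, v).float()
    err = (out - ref_fp32).abs().max().item()
    err_ref = (ref_fp16 - ref_fp32).abs().max().item()
    assert err <= factor * err_ref + 1e-5
```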