gaoqiong / flash-attention · Commits
flash_attn/flash_attn_interface.py at 35d589fa81a68b7cb806982af4fafac0f19d644d
14 Oct, 2022 · 2 commits
Fix QKV interface to allocate output in Python · 1b9facac
Tri Dao authored Oct 14, 2022
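This change moves output allocation out of the CUDA extension and into the Python wrapper. A minimal sketch of that pattern, assuming a hypothetical extension entry point `flash_attn_cuda.fwd` (the real signature differs):

```python
import torch

def flash_attn_qkv_forward(qkv, softmax_scale=None):
    # qkv: (total_tokens, 3, nheads, headdim), packed Q, K, V
    total, _, nheads, headdim = qkv.shape
    if softmax_scale is None:
        softmax_scale = headdim ** -0.5
    # Allocate the output here, in Python, and hand it to the extension,
    # instead of having the C++ side create and return it.
    out = torch.empty(total, nheads, headdim,
                      dtype=qkv.dtype, device=qkv.device)
    # flash_attn_cuda.fwd(qkv, out, softmax_scale, ...)  # hypothetical call
    return out
```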
Implement attention kernel that splits the batch into two · 5badfb78
Tri Dao authored Oct 13, 2022
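A sketch of the batch-splitting idea, with plain PyTorch attention standing in for the fused kernel; the halving point and the reference implementation are illustrative assumptions, not the kernel's actual decomposition:

```python
import torch

def attention_ref(q, k, v):
    # q, k, v: (batch, seqlen, nheads, headdim)
    scale = q.shape[-1] ** -0.5
    scores = torch.einsum('bthd,bshd->bhts', q, k) * scale
    probs = scores.softmax(dim=-1)
    return torch.einsum('bhts,bshd->bthd', probs, v)

def attention_batch_split(q, k, v):
    # Process each half of the batch separately, then stitch the results.
    half = q.shape[0] // 2
    out_lo = attention_ref(q[:half], k[:half], v[:half])
    out_hi = attention_ref(q[half:], k[half:], v[half:])
    return torch.cat([out_lo, out_hi], dim=0)
```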
04 Jul, 2022 · 2 commits
Do P * dP (pointwise) in the bwd in fp32 instead of fp16 · a5559a0e
Tri Dao authored Jul 03, 2022
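The softmax backward needs the pointwise product P * dP; taking it in fp32 before the row reduction avoids fp16 rounding in the gradient. A hedged sketch of that step (the function name and shapes are assumptions):

```python
import torch

def softmax_bwd_fp32(p, dp):
    # p, dp: (batch, nheads, seqlen_q, seqlen_k) in fp16
    p32, dp32 = p.float(), dp.float()
    # dS = P * (dP - rowsum(P * dP)), with the pointwise product in fp32
    row = (p32 * dp32).sum(dim=-1, keepdim=True)
    ds = p32 * (dp32 - row)
    return ds.to(p.dtype)
```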
Implement cross attention · 6c3a8c65
Tri Dao authored Jun 30, 2022
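Cross attention lets queries attend to a separately packed key/value sequence, so seqlen_q and seqlen_k may differ. A minimal PyTorch sketch, assuming a packed `kv` layout of (batch, seqlen_k, 2, nheads, headdim); the real interface's argument layout may differ:

```python
import torch

def cross_attention(q, kv):
    # q:  (batch, seqlen_q, nheads, headdim)
    # kv: (batch, seqlen_k, 2, nheads, headdim), packed K and V
    k, v = kv.unbind(dim=2)
    scale = q.shape[-1] ** -0.5
    scores = torch.einsum('bthd,bshd->bhts', q, k) * scale
    probs = scores.softmax(dim=-1)
    return torch.einsum('bhts,bshd->bthd', probs, v)
```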
02 Jun, 2022 · 1 commit
Rename src -> flash_attn · 5a61cb77
Tri Dao authored Jun 01, 2022
29 May, 2022 · 1 commit
Reorganize directories, add banner figure · 67c37795
Tri Dao authored May 29, 2022
26 May, 2022 · 1 commit
Rename, add benchmarking script · 9dbc491a
Tri Dao authored May 26, 2022
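A sketch of what such a benchmarking script might look like, timing a reference attention with torch.utils.benchmark; the tensor sizes and the reference implementation are illustrative assumptions:

```python
import torch
import torch.utils.benchmark as benchmark

def attention_ref(q, k, v):
    scale = q.shape[-1] ** -0.5
    probs = (torch.einsum('bthd,bshd->bhts', q, k) * scale).softmax(dim=-1)
    return torch.einsum('bhts,bshd->bthd', probs, v)

if __name__ == '__main__':
    device = 'cuda' if torch.cuda.is_available() else 'cpu'
    # (batch, seqlen, nheads, headdim) sizes chosen only for illustration
    q, k, v = (torch.randn(8, 512, 12, 64, device=device) for _ in range(3))
    timer = benchmark.Timer(
        stmt='attention_ref(q, k, v)',
        globals={'attention_ref': attention_ref, 'q': q, 'k': k, 'v': v})
    print(timer.timeit(20))
```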
20 May, 2022 · 1 commit
First release · 1fcbe6f0
Tri Dao authored May 20, 2022