gaoqiong / flash-attention
Commits for csrc/flash_attn/src/fmha/softmax.h at 1aa6d7d9b60bf8fbb5584f057934bdee15ed33fe
21 Oct, 2022 · 1 commit

Rework dropout to decouple forward and backward · 1aa6d7d9
Tri Dao authored Oct 18, 2022
They don't have to have the same block size, number of threads, etc.
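The commit message only says that forward and backward no longer need to share a block size or thread count. One common way to achieve that kind of decoupling, sketched below purely as an assumption about the approach, is to derive each dropout decision from a counter-based RNG keyed by a saved seed, offset, and the element's global index, so either pass can regenerate the identical mask under any launch configuration. The function name `dropout_keep` and its parameters are hypothetical, not this repository's API.

```cuda
// Hypothetical sketch: regenerate a dropout decision from (seed, offset, element index)
// with a counter-based Philox generator, so the forward and backward passes can use
// different block sizes / thread counts and still agree on the same mask.
#include <curand_kernel.h>

__device__ bool dropout_keep(unsigned long long seed,
                             unsigned long long offset,
                             unsigned long long elem_idx,
                             float p_dropout) {
    curandStatePhilox4_32_10_t state;
    // The subsequence is derived from the element index, not from threadIdx/blockIdx,
    // so any launch configuration reproduces the same random stream per element.
    curand_init(seed, elem_idx, offset, &state);
    return curand_uniform(&state) > p_dropout;
}
```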
10 Jul, 2022 · 1 commit

Refactor to template on __half, implement bf16 util functions · e518a4b3
Tri Dao authored Jul 08, 2022
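As a rough illustration of what "templating on __half and adding bf16 util functions" can look like, the sketch below defines per-type conversion helpers specialized for __half and __nv_bfloat16 and a kernel templated on the element type. The `TypeUtil` struct and `scale_kernel` names are invented here for illustration and are not the code in this repository.

```cuda
// Hypothetical sketch of element-type utilities templated on __half / __nv_bfloat16.
#include <cuda_fp16.h>
#include <cuda_bf16.h>

template <typename T> struct TypeUtil;

template <> struct TypeUtil<__half> {
    static __device__ float to_float(__half x) { return __half2float(x); }
    static __device__ __half from_float(float x) { return __float2half(x); }
};

template <> struct TypeUtil<__nv_bfloat16> {
    static __device__ float to_float(__nv_bfloat16 x) { return __bfloat162float(x); }
    static __device__ __nv_bfloat16 from_float(float x) { return __float2bfloat16(x); }
};

// A kernel templated on the element type: math is done in fp32 regardless of T.
template <typename T>
__global__ void scale_kernel(T* data, float scale, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        data[i] = TypeUtil<T>::from_float(TypeUtil<T>::to_float(data[i]) * scale);
    }
}
```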
02 Jun, 2022 · 1 commit

Remove softmax fp16 max · 05087332
Tri Dao authored Jun 02, 2022
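The commit title gives no further detail, but softmax.h is where the row-wise softmax lives, and softmax kernels generally rely on subtracting the row maximum before exponentiating so that `expf` cannot overflow. The sketch below shows that standard max-subtraction trick as generic background only; it is not the commit's actual diff.

```cuda
// Background sketch: numerically stable row softmax via max subtraction, in fp32.
#include <math.h>

__device__ void softmax_row(const float* x, float* y, int n) {
    float m = -INFINITY;
    for (int i = 0; i < n; ++i) m = fmaxf(m, x[i]);   // row max
    float sum = 0.f;
    for (int i = 0; i < n; ++i) {                     // exponentiate shifted values
        y[i] = expf(x[i] - m);
        sum += y[i];
    }
    float inv = 1.f / sum;
    for (int i = 0; i < n; ++i) y[i] *= inv;          // normalize
}
```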
26 May, 2022 · 1 commit

Rename, add benchmarking script · 9dbc491a
Tri Dao authored May 26, 2022
20 May, 2022 · 1 commit

First release · 1fcbe6f0
Tri Dao authored May 20, 2022