gaoqiong / flash-attention · Commits · 71f674ae23e69af55b09ea75d81ee1b5010f9244
History for flash_attn/layers/rotary.py
17 Nov, 2022 · 1 commit
[Rotary] Customize base, support seqlen_offset · 71f674ae
Tri Dao authored Nov 17, 2022
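For orientation, here is a minimal PyTorch sketch of what "customize base" and "seqlen_offset" plausibly refer to: `base` sets the geometric progression of rotation frequencies, and `seqlen_offset` shifts the position indices so that, e.g., tokens appended during incremental decoding keep their true positions. The function name `apply_rotary`, its signature, and the interleaved pairing convention are illustrative assumptions, not the repo's actual API.

import torch

def apply_rotary(x: torch.Tensor, base: float = 10000.0, seqlen_offset: int = 0) -> torch.Tensor:
    """Rotate channel pairs of x (batch, seqlen, nheads, headdim) by position-dependent angles."""
    _, seqlen, _, headdim = x.shape
    # One rotation frequency per channel pair, geometric in `base`.
    inv_freq = 1.0 / (base ** (torch.arange(0, headdim, 2, device=x.device, dtype=torch.float32) / headdim))
    # Positions start at seqlen_offset instead of 0 (assumed use: incremental
    # decoding, where cached tokens already occupy positions 0..offset-1).
    pos = torch.arange(seqlen_offset, seqlen_offset + seqlen, device=x.device, dtype=torch.float32)
    angles = torch.outer(pos, inv_freq)    # (seqlen, headdim // 2)
    cos = angles.cos()[None, :, None, :]   # broadcast over batch and heads
    sin = angles.sin()[None, :, None, :]
    x1, x2 = x[..., 0::2], x[..., 1::2]    # interleaved channel pairs
    out = torch.empty_like(x)
    out[..., 0::2] = x1 * cos - x2 * sin
    out[..., 1::2] = x1 * sin + x2 * cos
    return out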
14 Nov, 2022 · 1 commit
Add MLP, MHA, Block, Embedding modules · d4b320b3
Tri Dao authored Nov 13, 2022
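The commit message names MLP, MHA, Block, and Embedding modules without showing their code. The sketch below illustrates the standard pre-norm composition such modules typically implement, using torch.nn stand-ins (nn.MultiheadAttention in place of the repo's MHA); all class names, signatures, and defaults here are assumptions.

import torch.nn as nn

class Mlp(nn.Module):
    def __init__(self, dim: int, hidden_dim: int):
        super().__init__()
        self.fc1 = nn.Linear(dim, hidden_dim)
        self.act = nn.GELU()
        self.fc2 = nn.Linear(hidden_dim, dim)

    def forward(self, x):
        return self.fc2(self.act(self.fc1(x)))

class Block(nn.Module):
    """Pre-norm transformer block: x + MHA(LN(x)), then x + MLP(LN(x))."""
    def __init__(self, dim: int, nheads: int):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.mha = nn.MultiheadAttention(dim, nheads, batch_first=True)
        self.norm2 = nn.LayerNorm(dim)
        self.mlp = Mlp(dim, 4 * dim)

    def forward(self, x):
        h = self.norm1(x)
        x = x + self.mha(h, h, h, need_weights=False)[0]
        x = x + self.mlp(self.norm2(x))
        return x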
05 Nov, 2022 · 1 commit
Implement rotary embedding in CUDA · ca81f32e
Tri Dao authored Nov 04, 2022
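For reference, the rotation a rotary-embedding kernel applies is the standard RoPE transform (Su et al., 2021): for a vector x at position p with head dimension d, each channel pair (x_{2i}, x_{2i+1}) is rotated by a position-dependent angle, with base conventionally 10000.

% Standard RoPE rotation; `base` is the tunable frequency base.
\theta_{p,i} = p \cdot \mathrm{base}^{-2i/d}, \qquad
\begin{pmatrix} x'_{2i} \\ x'_{2i+1} \end{pmatrix}
= \begin{pmatrix} \cos\theta_{p,i} & -\sin\theta_{p,i} \\
                  \sin\theta_{p,i} & \cos\theta_{p,i} \end{pmatrix}
  \begin{pmatrix} x_{2i} \\ x_{2i+1} \end{pmatrix}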
02 Jun, 2022 · 1 commit
Rename src -> flash_attn · 5a61cb77
Tri Dao authored Jun 01, 2022
29 May, 2022 · 1 commit
Reorganize directories, add banner figure · 67c37795
Tri Dao authored May 29, 2022
20 May, 2022 · 1 commit
First release · 1fcbe6f0
Tri Dao authored May 20, 2022