gaoqiong / flash-attention · Commits · 008951f1d94a29eaeee48d25dbfa11df1ba12413
History for flash_attn/flash_attn_triton.py
31 Oct, 2022 · 5 commits
Support all head dimensions up to 128 in the Triton fwd · 008951f1
Tri Dao authored Oct 30, 2022
Support arbitrary seqlens (both q & k) in Triton bwd · b910bf14
Tri Dao authored Oct 30, 2022
Support arbitrary seqlen_k in Triton bwd · dc554693
Tri Dao authored Oct 30, 2022
Fix Triton fwd to support seqlen not multiples of 128 · d11341fd
Tri Dao authored Oct 30, 2022
Implement FlashAttention in Triton · b0c0db81
Tri Dao authored Oct 30, 2022
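
The last entry above adds the initial Triton implementation; the later commits extend it to head dimensions up to 128 and to sequence lengths that are not multiples of 128. Below is a minimal usage sketch, assuming flash_attn_triton.py exposes a flash_attn_func that takes (batch, seqlen, nheads, headdim) fp16 CUDA tensors; the import path, function name, and argument names are assumptions, not taken from this listing, so check the file at this commit before relying on them.

# Minimal sketch (assumption: flash_attn.flash_attn_triton provides
# flash_attn_func(q, k, v, bias=None, causal=False, softmax_scale=None));
# verify against flash_attn_triton.py at commit 008951f1 before use.
import torch
from flash_attn.flash_attn_triton import flash_attn_func  # assumed import path

batch, nheads, headdim = 2, 8, 128    # head dim up to 128, per commit 008951f1
seqlen_q, seqlen_k = 1000, 1000       # not multiples of 128, per commits d11341fd / dc554693 / b910bf14

q = torch.randn(batch, seqlen_q, nheads, headdim, device="cuda", dtype=torch.float16, requires_grad=True)
k = torch.randn(batch, seqlen_k, nheads, headdim, device="cuda", dtype=torch.float16, requires_grad=True)
v = torch.randn(batch, seqlen_k, nheads, headdim, device="cuda", dtype=torch.float16, requires_grad=True)

out = flash_attn_func(q, k, v, causal=True)  # Triton fwd
out.sum().backward()                         # Triton bwd; arbitrary seqlen_q / seqlen_k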