gaoqiong / flash-attention
Commits · 9818f85fee29ac6b60c9214bce841f8109a18b1b
csrc/ft_attention/decoder_masked_multihead_attention.cu
21 Apr, 2023 · 1 commit
[Gen] Fix FT kernel smem size, CG when batch size changed · 311d6606
Tri Dao authored Apr 20, 2023

04 Jan, 2023 · 1 commit
[Gen] Add kernel from FasterTransformer for benchmarking · a01d1213
Tri Dao authored Jan 03, 2023