gaoqiong / flash-attention · Commits
Commit: 4d87e4d875077ad9efd25030efa4ab0ba92c19e1
Path: flash-attention / csrc / ft_attention
15 Mar, 2023 (1 commit)
dc08ea1c · Support H100 for other CUDA extensions
Tri Dao authored Mar 15, 2023
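Supporting H100 means building the extensions for compute capability 9.0 (sm_90). As a hedged illustration (not the repo's build script), a runtime check for a Hopper device looks like this:

```cuda
// Illustrative sketch: confirm the device is Hopper (sm_90, i.e. H100).
// Extensions must also be compiled with a matching target, e.g.
// -gencode arch=compute_90,code=sm_90.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, /*device=*/0);
    bool is_hopper = (prop.major == 9);  // H100 reports major=9, minor=0
    printf("sm_%d%d, Hopper: %s\n", prop.major, prop.minor,
           is_hopper ? "yes" : "no");
    return 0;
}
```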
15 Jan, 2023 (2 commits)
f1e01c27 · [Gen] Pass qkv_stride to ft_attention kernel for batched generation
Tri Dao authored Jan 15, 2023
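In batched generation the q/k/v for the new token are typically slices of one packed projection buffer, which need not be contiguous; passing an explicit row stride lets the kernel address it correctly. A minimal sketch of why the stride matters, with all names hypothetical rather than the kernel's actual signature:

```cuda
// Hypothetical sketch: index a packed QKV buffer via an explicit per-row
// stride. When qkv is a slice of a larger buffer, qkv_stride can exceed
// 3 * hidden_size, so it cannot be inferred from the shape alone.
// Assumed layout: [batch, qkv_stride], q/k/v each hidden_size floats wide.
// Launch with gridDim.x = batch and blockDim.x >= hidden_size.
__global__ void split_qkv(const float* qkv, float* q, float* k, float* v,
                          int qkv_stride, int hidden_size) {
    int b = blockIdx.x;   // batch index
    int i = threadIdx.x;  // element within hidden_size
    if (i >= hidden_size) return;
    const float* row = qkv + (long long)b * qkv_stride;
    q[b * hidden_size + i] = row[i];                    // Q slice
    k[b * hidden_size + i] = row[hidden_size + i];      // K slice
    v[b * hidden_size + i] = row[2 * hidden_size + i];  // V slice
}
```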
7c219154 · [Gen] Make generation work with Tensor Parallel
Tri Dao authored Jan 15, 2023
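With tensor parallelism, attention heads are sharded across ranks, so each rank generates with only its own slice of heads. A small host-side sketch of the head partitioning (hypothetical helper, assuming heads divide evenly across ranks):

```cuda
// Hypothetical sketch: each tensor-parallel rank owns a contiguous
// [begin, end) range of attention heads and runs the generation kernel
// only on that shard.
#include <cassert>

struct HeadShard { int begin; int end; };  // [begin, end) head indices

HeadShard shard_heads(int num_heads, int world_size, int rank) {
    assert(num_heads % world_size == 0);  // assumed evenly divisible
    int per_rank = num_heads / world_size;
    return { rank * per_rank, (rank + 1) * per_rank };
}
```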
04 Jan, 2023 (3 commits)
be1afaa2 · [Gen, FT] Use fp32 accum for FMA
Tri Dao authored Jan 03, 2023
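Accumulating half-precision products in fp32 is the standard way to keep long dot products (e.g. attention scores summed over the sequence) numerically stable. A minimal device-side sketch of the technique, not the kernel's actual code:

```cuda
// Sketch: inputs stay in fp16, but the running sum is kept in fp32.
// Summing many fp16 terms directly loses precision; fmaf performs the
// multiply-add in float.
#include <cuda_fp16.h>

__device__ float dot_fp32_accum(const __half* a, const __half* b, int n) {
    float acc = 0.f;  // fp32 accumulator
    for (int i = 0; i < n; ++i) {
        acc = fmaf(__half2float(a[i]), __half2float(b[i]), acc);
    }
    return acc;
}
```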
f266fc72 · [Gen, FT] Use tlength instead of params.timestep for rotary
Tri Dao authored Jan 03, 2023
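Rotary embeddings rotate each token by an angle determined by its position in its own sequence. In a batch where sequences have different lengths, a single global timestep would rotate some tokens by the wrong angle, hence indexing by the per-sequence length (tlength). A simplified sketch (hypothetical helper, fp32 for clarity):

```cuda
// Sketch: apply rotary embedding to one (x0, x1) dimension pair at the
// token's own position `tlength`, rather than a batch-wide timestep.
#include <math.h>

__device__ void apply_rotary(float& x0, float& x1, int tlength,
                             int dim_idx, int rotary_dim) {
    // Standard RoPE frequency for this pair of dimensions.
    float inv_freq = powf(10000.f, -2.f * dim_idx / (float)rotary_dim);
    float angle = tlength * inv_freq;
    float c = cosf(angle), s = sinf(angle);
    float r0 = x0 * c - x1 * s;
    float r1 = x0 * s + x1 * c;
    x0 = r0;
    x1 = r1;
}
```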
a01d1213 · [Gen] Add kernel from FasterTransformer for benchmarking
Tri Dao authored Jan 03, 2023
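Benchmarking one kernel against another is usually done by timing launches with CUDA events. A minimal sketch of such a harness (the kernel here is a stand-in, not the FasterTransformer kernel):

```cuda
// Sketch: average the runtime of repeated launches using CUDA events.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void some_kernel() {}  // stand-in for the kernel under test

int main() {
    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);
    some_kernel<<<1, 32>>>();  // warm-up launch, excluded from timing
    cudaEventRecord(start);
    for (int i = 0; i < 100; ++i) some_kernel<<<1, 32>>>();
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);
    float ms = 0.f;
    cudaEventElapsedTime(&ms, start, stop);
    printf("avg launch+run: %.3f ms\n", ms / 100.f);
    return 0;
}
```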