gaoqiong / flash-attention · Commits at 8c6609ae1a6841263d56bcbdef2d2949de1d46ad
Path: flash-attention/flash_attn/ops
09 Dec, 2022 · 1 commit
[LayerNorm] Support all dimensions up to 6k (if divisible by 8) · 8c6609ae
Tri Dao authored Dec 08, 2022
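The constraint in this commit message is easy to encode. Below is a minimal sketch of the shape check, assuming "6k" means 6144; the helper name is hypothetical and not part of the flash_attn API:

```python
# Hedged sketch of the constraint described in the commit message: the fused
# LayerNorm kernel covers hidden dimensions up to "6k" (assumed to mean 6144
# here) provided the dimension is divisible by 8. The function name is
# illustrative, not the repo's API.
def fused_layernorm_supported(hidden_dim: int) -> bool:
    return hidden_dim <= 6144 and hidden_dim % 8 == 0

assert fused_layernorm_supported(4096)       # typical model width: supported
assert not fused_layernorm_supported(4095)   # not divisible by 8
assert not fused_layernorm_supported(8192)   # above the assumed 6k ceiling
```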
18 Nov, 2022 · 1 commit
Add __init__.py files to subdirectories for installation · ece539ab
Tri Dao authored Nov 17, 2022
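For context, classic setuptools package discovery only treats directories containing an __init__.py as packages, which is why this commit matters for installation. A minimal setup.py sketch illustrating the mechanism (not the repo's actual setup.py):

```python
# Illustrative setup.py, not flash-attention's actual one. find_packages()
# only discovers directories that contain an __init__.py, so subpackages
# such as flash_attn.ops would be silently dropped from the installed
# distribution without those files.
from setuptools import setup, find_packages

setup(
    name="flash_attn",
    packages=find_packages(),  # picks up flash_attn, flash_attn.ops, ...
)
```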
14 Nov, 2022 · 3 commits
Add GPT and ViT models · 2e33fc8e
Tri Dao authored Nov 13, 2022
Add MLP, MHA, Block, Embedding modules · d4b320b3
Tri Dao authored Nov 13, 2022
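These are the standard Transformer building blocks. A minimal pre-norm sketch in plain PyTorch showing how MLP and MHA typically compose into a Block; this mirrors the common pattern, not the repo's exact implementation:

```python
# Hypothetical sketch of a pre-norm Transformer block composed from MHA and
# MLP submodules, in the spirit of the commit; not flash_attn's actual code.
import torch
import torch.nn as nn

class Block(nn.Module):
    def __init__(self, dim: int, n_heads: int, mlp_ratio: int = 4):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.norm2 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(
            nn.Linear(dim, mlp_ratio * dim),
            nn.GELU(),
            nn.Linear(mlp_ratio * dim, dim),
        )

    def forward(self, x):
        h = self.norm1(x)
        x = x + self.attn(h, h, h, need_weights=False)[0]  # residual around MHA
        x = x + self.mlp(self.norm2(x))                    # residual around MLP
        return x
```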
Add fused_dense and dropout_add_layernorm CUDA extensions · fa6d1ce4
Tri Dao authored Nov 13, 2022
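The name of the second extension describes the fusion it performs. Below is an unfused plain-PyTorch reference of the semantics that dropout + residual-add + LayerNorm computes in a single kernel; this is a sketch of the math, not the extension's API:

```python
# Unfused reference for the dropout_add_layernorm fusion: dropout on the new
# branch x0, add the residual stream, then LayerNorm. The fused CUDA extension
# performs the same computation in one kernel; this sketch only shows the
# semantics, not its interface.
import torch
import torch.nn.functional as F

def dropout_add_layer_norm_ref(x0, residual, weight, bias, p, eps):
    mixed = residual + F.dropout(x0, p=p, training=True)
    return F.layer_norm(mixed, (mixed.shape[-1],), weight, bias, eps)

hidden = 1024
x0 = torch.randn(2, 16, hidden)
res = torch.randn(2, 16, hidden)
out = dropout_add_layer_norm_ref(
    x0, res, torch.ones(hidden), torch.zeros(hidden), p=0.1, eps=1e-5
)
```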