gaoqiong / flash-attention · Commits
"llm/patches/0005-default-pretokenizer.patch" did not exist on "0fc0cfc6d2133e2e8515faeadd8ed436a6062b09"
History for csrc/layer_norm/README.md at 0bf5e50038ee341ece03bfd0c8ff45a6c57aed5a

29 Nov, 2022 (1 commit)

Release training code · 0bf5e500
Tri Dao authored Nov 28, 2022
15 Nov, 2022 (1 commit)

Mention that some CUDA extensions have only been tested on A100s · 43ab0b52
Tri Dao authored Nov 15, 2022
14 Nov, 2022 (2 commits)

Add GPT and ViT models · 2e33fc8e
Tri Dao authored Nov 13, 2022

Add fused_dense and dropout_add_layernorm CUDA extensions · fa6d1ce4
Tri Dao authored Nov 13, 2022
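
The fa6d1ce4 commit introduced the dropout_add_layernorm CUDA extension that lives in this csrc/layer_norm directory. As a rough illustration of what the fused kernel computes, here is a minimal usage sketch; the import path flash_attn.ops.layer_norm and the dropout_add_layer_norm signature are assumptions based on later flash-attention releases, not something this commit log confirms. Note that per the 43ab0b52 commit message, these extensions had only been tested on A100s at the time.

    import torch

    # Assumed import path and signature; both are hypothetical here.
    from flash_attn.ops.layer_norm import dropout_add_layer_norm

    hidden = 768
    x0 = torch.randn(2, 1024, hidden, device="cuda", dtype=torch.float16)
    residual = torch.randn_like(x0)
    weight = torch.ones(hidden, device="cuda", dtype=torch.float16)
    bias = torch.zeros(hidden, device="cuda", dtype=torch.float16)

    # The fused kernel computes, in a single pass over the data, roughly:
    #   layer_norm(dropout(x0, p=0.1) + residual)
    out = dropout_add_layer_norm(x0, residual, weight, bias, 0.1, 1e-5)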