Commit history for gaoqiong/flash-attention at fa6d1ce44fc2c8f9fe6330b5e98697fdc434e729
File: csrc/layer_norm/static_switch.h
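Headers named static_switch.h in flash-attention-style code bases typically provide a macro that turns a runtime boolean into a compile-time constant, so the surrounding code can instantiate a templated kernel on it. Below is a minimal sketch of that pattern; the macro body and the names BOOL_SWITCH / CONST_NAME are illustrative, not the verbatim contents of this file.

// Turn a runtime bool into a constexpr bool visible inside the lambda body,
// so the wrapped code can instantiate templates on it without duplicating
// the call site for each value.
// Sketch of the usual pattern; not the verbatim contents of static_switch.h.
#define BOOL_SWITCH(COND, CONST_NAME, ...)      \
    [&] {                                       \
        if (COND) {                             \
            constexpr bool CONST_NAME = true;   \
            return __VA_ARGS__();               \
        } else {                                \
            constexpr bool CONST_NAME = false;  \
            return __VA_ARGS__();               \
        }                                       \
    }()

// Hypothetical usage: dispatch to a kernel templated on a runtime flag.
// BOOL_SWITCH(params.has_residual, HasResidualConst, [&] {
//     run_layer_norm_kernel<HasResidualConst>(params);
// });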
14 Nov, 2022 (1 commit)
fa6d1ce4 · Add fused_dense and dropout_add_layernorm CUDA extensions
Tri Dao authored Nov 13, 2022
14 Oct, 2022 (1 commit)
5badfb78 · Implement attention kernel that splits the batch into two
Tri Dao authored Oct 13, 2022
10 Jul, 2022 (1 commit)
de19de7a · Implement for bf16
Tri Dao authored Jul 09, 2022
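The "Implement for bf16" commit points at bfloat16 support; the same switch pattern is commonly extended to map a runtime dtype flag to a compile-time element type. A minimal sketch under that assumption; FP16_SWITCH, elem_type, and the usage names are illustrative and not verified against this commit.

#include <cuda_fp16.h>   // __half
#include <cuda_bf16.h>   // __nv_bfloat16

// Map a runtime "is fp16" flag to a compile-time element type alias that the
// wrapped lambda body can use to instantiate a templated kernel for either
// half or bfloat16. Illustrative sketch; names are not taken from this file.
#define FP16_SWITCH(IS_FP16, ...)                \
    [&] {                                        \
        if (IS_FP16) {                           \
            using elem_type = __half;            \
            return __VA_ARGS__();                \
        } else {                                 \
            using elem_type = __nv_bfloat16;     \
            return __VA_ARGS__();                \
        }                                        \
    }()

// Hypothetical usage:
// FP16_SWITCH(params.is_fp16, [&] {
//     run_layer_norm<elem_type>(params);
// });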