gaoqiong / flash-attention · Commits at dff68c2b228234e34714a6cb1b966cb3a09496b9
History for flash_attn/modules/mlp.py
23 Dec, 2022 (1 commit)
Simplify FusedDense · e68ebbe8
Tri Dao authored Dec 22, 2022
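For context: FusedDense fuses the bias add (and optionally an activation) into the GEMM epilogue, so numerically it should match a plain nn.Linear. A minimal equivalence sketch under that assumption; the import path and constructor shown are assumptions about the repo's API, and the check needs a CUDA GPU with the fused kernels installed:

```python
import torch
import torch.nn as nn

if torch.cuda.is_available():
    torch.manual_seed(0)
    ref = nn.Linear(512, 2048).half().cuda()
    x = torch.randn(4, 128, 512, dtype=torch.float16, device="cuda")
    try:
        from flash_attn.ops.fused_dense import FusedDense  # assumed import path
        fused = FusedDense(512, 2048).half().cuda()
        # Assumes FusedDense keeps nn.Linear-compatible weight/bias parameters.
        fused.load_state_dict(ref.state_dict())
        # Fusion changes memory traffic, not math, so outputs should agree
        # up to fp16 tolerance.
        torch.testing.assert_close(fused(x), ref(x), rtol=1e-3, atol=1e-3)
    except ImportError:
        pass  # fused kernels not installed
```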
20 Dec, 2022 (1 commit)
Implement last_layer_subset optimization for BERT · 13cdceb3
Tri Dao authored Dec 19, 2022
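The idea behind this optimization: in BERT masked-LM pretraining only the masked positions feed the prediction head, so the final layer can run with queries for just that subset while keys/values still cover every token. A plain-PyTorch sketch of the concept (the layer and index names here are illustrative, not the repo's BERT implementation):

```python
import torch
import torch.nn as nn

batch, seqlen, dim, nheads = 2, 128, 256, 8
hidden = torch.randn(batch, seqlen, dim)  # output of the second-to-last layer
# Positions whose outputs the MLM head actually needs (e.g. masked tokens).
subset_idx = torch.tensor([[3, 17, 42], [5, 9, 99]])

mha = nn.MultiheadAttention(dim, nheads, batch_first=True)

# Full last layer would be: out, _ = mha(hidden, hidden, hidden)
# Subset version: queries only at the needed positions, keys/values over all
# tokens, so attention cost scales with num_masked instead of seqlen.
q = torch.gather(hidden, 1, subset_idx.unsqueeze(-1).expand(-1, -1, dim))
out_subset, _ = mha(q, hidden, hidden)  # (batch, num_masked, dim)
```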
23 Nov, 2022 (1 commit)
[ViT] Use dropout_add_ln for the 1st layer norm · 1feb9426
Tri Dao authored Nov 23, 2022
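dropout_add_ln refers to fusing dropout, the residual add, and LayerNorm into a single kernel; the fusion saves memory traffic rather than changing the math. An unfused reference module for what that pattern computes (a sketch, not the repo's fused op):

```python
import torch
import torch.nn as nn

class DropoutAddLayerNormRef(nn.Module):
    """Unfused reference for a fused dropout + residual-add + LayerNorm kernel:
    out = LayerNorm(dropout(x) + residual)."""
    def __init__(self, dim, p=0.1):
        super().__init__()
        self.dropout = nn.Dropout(p)
        self.norm = nn.LayerNorm(dim)

    def forward(self, x, residual):
        return self.norm(self.dropout(x) + residual)
```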
14 Nov, 2022 (1 commit)
Add MLP, MHA, Block, Embedding modules · d4b320b3
Tri Dao authored Nov 13, 2022
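The Mlp module introduced in this commit is the standard transformer feed-forward block: a linear expansion, a pointwise activation, and a linear projection back. A minimal sketch of the shape such a module typically takes in mlp.py; argument names and the 4x hidden-size default are common conventions, not confirmed details of this file:

```python
import torch.nn as nn
import torch.nn.functional as F

class Mlp(nn.Module):
    # Standard transformer feed-forward: fc1 -> activation -> fc2.
    def __init__(self, in_features, hidden_features=None, out_features=None,
                 activation=F.gelu):
        super().__init__()
        out_features = out_features or in_features
        hidden_features = hidden_features or 4 * in_features  # common default
        self.fc1 = nn.Linear(in_features, hidden_features)
        self.activation = activation
        self.fc2 = nn.Linear(hidden_features, out_features)

    def forward(self, x):
        return self.fc2(self.activation(self.fc1(x)))
```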