gaoqiong / flash-attention
Commit history for flash_attn/modules/block.py at commit ff34123bd426bcc3ca0d1a11b6173652fb84d033
16 Jan, 2023 (1 commit)
Reorder LN in Block, support OPT · ff34123b
Tri Dao authored Jan 15, 2023
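The "Reorder LN" here concerns where LayerNorm sits relative to the residual branches (pre-LN as in GPT-2 vs post-LN as in the original Transformer); OPT checkpoints differ in this ordering, so supporting OPT plausibly required making it configurable. A minimal sketch of the two orderings, using a hypothetical ToyBlock built on torch.nn rather than the repo's actual Block API:

    # Sketch of pre-LN vs post-LN ordering in a transformer block.
    # ToyBlock is a hypothetical illustration, not the repo's Block class.
    import torch
    import torch.nn as nn

    class ToyBlock(nn.Module):
        def __init__(self, dim, n_heads, prenorm=True):
            super().__init__()
            self.prenorm = prenorm
            self.norm1, self.norm2 = nn.LayerNorm(dim), nn.LayerNorm(dim)
            self.attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
            self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(),
                                     nn.Linear(4 * dim, dim))

        def forward(self, x):
            if self.prenorm:
                # Pre-LN: normalize before each sublayer; the residual path stays clean.
                h = self.norm1(x)
                x = x + self.attn(h, h, h, need_weights=False)[0]
                x = x + self.mlp(self.norm2(x))
            else:
                # Post-LN: normalize after each residual add (original Transformer).
                x = self.norm1(x + self.attn(x, x, x, need_weights=False)[0])
                x = self.norm2(x + self.mlp(x))
            return x

    y = ToyBlock(256, 8, prenorm=True)(torch.randn(2, 16, 256))  # (batch, seq, dim)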
07 Jan, 2023 (1 commit)
[TP] Implement TensorParallel without sequence parallel · 93383bd5
Tri Dao authored Jan 07, 2023
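Tensor parallelism in the Megatron-LM style shards each weight matrix over ranks; "without sequence parallel" means the activations entering and leaving the layer stay fully replicated on every rank rather than being sharded along the sequence dimension. A minimal sketch of the column-/row-parallel pairing for an MLP; TensorParallelMLP is a hypothetical name, assumes an initialized torch.distributed process group, and is not the repo's implementation:

    # Sketch of Megatron-style tensor parallelism for an MLP: fc1 is split
    # over output columns, fc2 over input rows, so one all-reduce per MLP.
    # Hypothetical code; assumes torch.distributed is already initialized.
    import torch.nn as nn
    import torch.distributed as dist

    class TensorParallelMLP(nn.Module):
        def __init__(self, dim, hidden, world_size):
            super().__init__()
            assert hidden % world_size == 0
            # Each rank holds a 1/world_size slice of the hidden dimension.
            self.fc1 = nn.Linear(dim, hidden // world_size)  # column-parallel
            # bias=False: a per-rank bias would otherwise be summed world_size times.
            self.fc2 = nn.Linear(hidden // world_size, dim, bias=False)  # row-parallel
            self.act = nn.GELU()

        def forward(self, x):
            # x is replicated on every rank; each rank produces a partial output.
            partial = self.fc2(self.act(self.fc1(x)))
            dist.all_reduce(partial)  # sum the partials: the single collective
            return partial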
25 Dec, 2022 (1 commit)
Implement Tensor Parallel for transformer Block · a8cfe515
Tri Dao authored Dec 25, 2022
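For the attention half of a block, the natural tensor-parallel split is over heads: the QKV projection is column-parallel and the output projection row-parallel. A sketch along the same lines as above; TensorParallelAttention is again a hypothetical illustration, not the repo's MHA:

    # Sketch of head-sharded attention: each rank keeps n_heads/world_size
    # heads; one all-reduce merges the partial output projections.
    # Hypothetical code; assumes torch.distributed is already initialized.
    import torch
    import torch.nn as nn
    import torch.distributed as dist

    class TensorParallelAttention(nn.Module):
        def __init__(self, dim, n_heads, world_size):
            super().__init__()
            assert n_heads % world_size == 0 and dim % n_heads == 0
            self.local_heads = n_heads // world_size
            self.head_dim = dim // n_heads
            local_dim = self.local_heads * self.head_dim
            self.qkv = nn.Linear(dim, 3 * local_dim)          # column-parallel
            self.out = nn.Linear(local_dim, dim, bias=False)  # row-parallel

        def forward(self, x):
            b, s, _ = x.shape
            q, k, v = self.qkv(x).chunk(3, dim=-1)
            # (batch, local_heads, seq, head_dim) layout for the local heads.
            q, k, v = (t.view(b, s, self.local_heads, self.head_dim).transpose(1, 2)
                       for t in (q, k, v))
            attn = torch.softmax(q @ k.transpose(-2, -1) / self.head_dim ** 0.5, dim=-1)
            out = (attn @ v).transpose(1, 2).reshape(b, s, -1)
            out = self.out(out)
            dist.all_reduce(out)  # sum each rank's head contributions
            return out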
19 Dec, 2022 (1 commit)
Implement BERT · 5fb6df0e
Tri Dao authored Dec 18, 2022
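BERT is a stack of post-LN encoder blocks over token, position, and segment embeddings. A self-contained sketch of that wiring using torch.nn.TransformerEncoderLayer (whose default norm_first=False gives the post-LN ordering); ToyBert is illustrative, not the repo's BERT model:

    # Sketch of a BERT-style encoder: token + position + segment embeddings
    # feeding post-LN transformer layers. Hypothetical illustration only.
    import torch
    import torch.nn as nn

    class ToyBert(nn.Module):
        def __init__(self, vocab=30522, dim=768, n_heads=12, n_layers=12, max_len=512):
            super().__init__()
            self.tok = nn.Embedding(vocab, dim)
            self.pos = nn.Embedding(max_len, dim)
            self.seg = nn.Embedding(2, dim)   # two segment (token type) ids
            self.norm = nn.LayerNorm(dim)     # embedding LayerNorm, as in BERT
            layer = nn.TransformerEncoderLayer(dim, n_heads, 4 * dim,
                                               activation="gelu", batch_first=True)
            self.encoder = nn.TransformerEncoder(layer, n_layers)

        def forward(self, input_ids, token_type_ids):
            pos_ids = torch.arange(input_ids.size(1), device=input_ids.device)
            x = self.tok(input_ids) + self.pos(pos_ids) + self.seg(token_type_ids)
            return self.encoder(self.norm(x))

    ids = torch.randint(0, 30522, (2, 16))
    hidden = ToyBert(n_layers=2)(ids, torch.zeros_like(ids))  # (2, 16, 768)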
14 Nov, 2022 (1 commit)
Add MLP, MHA, Block, Embedding modules · d4b320b3
Tri Dao authored Nov 13, 2022
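The four modules compose naturally: an embedding feeds a stack of Blocks, and each Block wires an attention mixer (MHA) and an MLP behind residual connections. A sketch of that composition with pluggable sublayers; the class and argument names here are hypothetical, and block.py's real interface may differ:

    # Sketch of a Block composed from pluggable attention (mixer) and MLP
    # sublayers, each behind a pre-LN residual branch. Hypothetical API.
    import torch
    import torch.nn as nn

    class ToyMHA(nn.Module):
        def __init__(self, dim, n_heads):
            super().__init__()
            self.attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)

        def forward(self, x):
            return self.attn(x, x, x, need_weights=False)[0]

    class ToyMLP(nn.Module):
        def __init__(self, dim, mult=4):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(dim, mult * dim), nn.GELU(),
                                     nn.Linear(mult * dim, dim))

        def forward(self, x):
            return self.net(x)

    class ToyBlock(nn.Module):
        def __init__(self, dim, mixer, mlp):
            super().__init__()
            self.norm1, self.norm2 = nn.LayerNorm(dim), nn.LayerNorm(dim)
            self.mixer, self.mlp = mixer, mlp

        def forward(self, x):
            x = x + self.mixer(self.norm1(x))   # attention branch
            return x + self.mlp(self.norm2(x))  # MLP branch

    block = ToyBlock(256, ToyMHA(256, 8), ToyMLP(256))
    y = block(torch.randn(2, 16, 256))  # (batch, seq, dim) in, same shape out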