gaoqiong / flash-attention · Commits
History for flash_attn/models/opt.py at commit 96d10f654527cc82c81022e16f77a8d9564f7eba

19 Apr, 2023 (1 commit)
Implement LLaMa · 96d10f65 (Tri Dao, authored Apr 18, 2023)

22 Mar, 2023 (1 commit)
Implement GPT-J · 4d87e4d8 (Tri Dao, authored Mar 22, 2023)

23 Jan, 2023 (1 commit)
[OPT] Load fp16 weights on CPU before moving to GPU · 78b7a1dc (Tri Dao, authored Jan 22, 2023)

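This commit message names a standard memory-saving pattern: materialize the checkpoint tensors on the CPU first, and move the model to the GPU only once the weights are in place, so the device never holds the checkpoint copy and the live parameters at the same time. A minimal PyTorch sketch of that pattern, using a toy nn.Linear as a stand-in for the repo's actual OPT loading code (the checkpoint path and model here are hypothetical):

    import torch
    import torch.nn as nn

    # Toy stand-in for the OPT model; the path below is hypothetical.
    model = nn.Linear(16, 16).half()
    torch.save(model.state_dict(), "opt_weights.pt")  # pretend: a downloaded fp16 checkpoint

    # Load the checkpoint onto the CPU (map_location="cpu") so the GPU never
    # holds the checkpoint tensors and the live parameters simultaneously.
    state_dict = torch.load("opt_weights.pt", map_location="cpu")

    # Keep floating-point tensors in fp16 while still on the CPU.
    state_dict = {k: v.half() if v.is_floating_point() else v
                  for k, v in state_dict.items()}

    model.load_state_dict(state_dict)
    if torch.cuda.is_available():
        model = model.to("cuda")  # a single host-to-device copy of fp16 weights
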
16 Jan, 2023 (1 commit)
Reorder LN in Block, support OPT · ff34123b (Tri Dao, authored Jan 15, 2023)
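
"Reorder LN in Block" refers to where LayerNorm sits relative to the residual connection in a transformer block: post-LN (the original Transformer) normalizes after the residual add, while pre-LN (GPT-2/OPT style) normalizes the sublayer input and keeps the residual path untouched. The following is a hedged sketch of the two orderings with generic mixer/MLP stand-ins; it illustrates the concept, not the repo's actual Block class:

    import torch
    import torch.nn as nn

    class PostLNBlock(nn.Module):
        """Post-LN: LayerNorm applied after each residual addition."""
        def __init__(self, dim, mixer, mlp):
            super().__init__()
            self.mixer, self.mlp = mixer, mlp
            self.norm1, self.norm2 = nn.LayerNorm(dim), nn.LayerNorm(dim)

        def forward(self, x):
            x = self.norm1(x + self.mixer(x))   # residual add, then normalize
            return self.norm2(x + self.mlp(x))

    class PreLNBlock(nn.Module):
        """Pre-LN (GPT-2/OPT style): LayerNorm applied before each sublayer."""
        def __init__(self, dim, mixer, mlp):
            super().__init__()
            self.mixer, self.mlp = mixer, mlp
            self.norm1, self.norm2 = nn.LayerNorm(dim), nn.LayerNorm(dim)

        def forward(self, x):
            x = x + self.mixer(self.norm1(x))   # normalize, then residual add
            return x + self.mlp(self.norm2(x))

    # Both orderings take the same sublayers; only the LN placement differs.
    block = PreLNBlock(64, mixer=nn.Identity(), mlp=nn.Linear(64, 64))
    y = block(torch.randn(2, 8, 64))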