gaoqiong / flash-attention
Commit history for tests/models/test_llama.py at commit b252072409e69c25f2b9d473cc534e49b24decd2
02 Jul, 2023 (1 commit)
[Rotary] Make sure frequency calculation is in fp32 · 62e98144
Tri Dao authored Jul 02, 2023
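The commit above keeps the rotary-embedding frequency calculation in fp32 so that positions and inverse frequencies are not computed in half precision. A minimal sketch of that idea, not the repository's actual code; the function name, signature, and default base are assumptions:

```python
import torch

def rotary_cos_sin(seqlen, dim, base=10000.0, device=None, dtype=torch.float16):
    # Hypothetical helper: frequencies are always computed in float32,
    # even when the model itself runs in fp16/bf16, to avoid precision
    # loss at large sequence positions.
    inv_freq = 1.0 / (
        base ** (torch.arange(0, dim, 2, device=device, dtype=torch.float32) / dim)
    )
    t = torch.arange(seqlen, device=device, dtype=torch.float32)
    freqs = torch.outer(t, inv_freq)  # (seqlen, dim // 2), still fp32
    # Only the final cos/sin tables are cast to the model dtype.
    return freqs.cos().to(dtype), freqs.sin().to(dtype)
```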
05 May, 2023 (1 commit)
[LLaMa] Fix last norm layer to use RMSNorm instead of LayerNorm · a9a4b4e4
Tri Dao authored May 04, 2023
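The commit above switches the model's final normalization layer to RMSNorm, which, unlike LayerNorm, performs no mean subtraction and has no bias term. A minimal sketch of RMSNorm, not the repository's implementation; the fp32 upcast and the eps default are assumptions:

```python
import torch
import torch.nn as nn

class RMSNorm(nn.Module):
    def __init__(self, dim, eps=1e-6):
        super().__init__()
        self.eps = eps
        self.weight = nn.Parameter(torch.ones(dim))

    def forward(self, x):
        # Normalize by the root-mean-square of the activations only:
        # no mean subtraction, no bias (unlike nn.LayerNorm).
        dtype = x.dtype
        x = x.float()  # assumed fp32 upcast for numerical stability
        x = x * torch.rsqrt(x.pow(2).mean(dim=-1, keepdim=True) + self.eps)
        return self.weight * x.to(dtype)
```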
19 Apr, 2023 (1 commit)
Implement LLaMa · 96d10f65
Tri Dao authored Apr 18, 2023