gaoqiong / flash-attention · Commits
"ollama/llm/llama.cpp/ggml/include/ggml-vulkan.h" did not exist on "ff27a8172ae24bbcff76eec4220c3081852c201b"
Branch: flash-attention
Path: flash_attn / utils / pretrained.py

07 Jan, 2023 · 1 commit
[Gen] Test generation with rotary embedding · 11be742a
Tri Dao authored Jan 07, 2023
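
This commit exercises generation with rotary position embeddings. As background, rotary embeddings encode token position by rotating pairs of query/key feature dimensions through position-dependent angles. The following is a minimal, generic sketch of the non-interleaved (half-split) variant in plain PyTorch; it is not the repository's optimized implementation, and the function name and defaults are hypothetical.

```python
import torch

def apply_rotary(x: torch.Tensor, base: float = 10000.0) -> torch.Tensor:
    # x: (batch, seqlen, nheads, headdim); returns the same shape with
    # positions encoded by rotating (x1, x2) feature pairs.
    batch, seqlen, nheads, headdim = x.shape
    half = headdim // 2
    # Per-pair inverse frequencies, then per-position rotation angles.
    inv_freq = base ** (-torch.arange(half, dtype=x.dtype, device=x.device) / half)
    angles = torch.arange(seqlen, dtype=x.dtype, device=x.device)[:, None] * inv_freq
    cos = angles.cos()[None, :, None, :]  # broadcast to (1, seqlen, 1, half)
    sin = angles.sin()[None, :, None, :]
    x1, x2 = x[..., :half], x[..., half:]
    # 2-D rotation of each pair by its position-dependent angle.
    return torch.cat([x1 * cos - x2 * sin, x1 * sin + x2 * cos], dim=-1)

q = torch.randn(2, 16, 8, 64)
q_rot = apply_rotary(q)  # same shape; relative positions now live in the rotation
```

Because the rotation depends only on position, the dot product of a rotated query and key depends on their positional offset, which is what makes the scheme attractive for generation with growing sequence lengths.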

27 Dec, 2022 · 1 commit
Tweak CrossEntropyLoss to take process_group in init · c6ecd40a
Tri Dao authored Dec 27, 2022
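
This commit moves the distributed process_group argument of CrossEntropyLoss into the constructor, so callers pass it once instead of threading it through every forward call. The sketch below illustrates why a vocabulary-parallel cross-entropy needs the group at all: with logits sharded over ranks, both the softmax normalizer and the target logit must be reduced across the group. The class name, signature, and shard layout here are assumptions for illustration, not flash_attn's actual API.

```python
import torch
import torch.distributed as dist
import torch.nn.functional as F

class ParallelCrossEntropyLoss(torch.nn.Module):
    """Cross-entropy over a vocabulary sharded across a process group.
    The group is supplied once, at construction (illustrative sketch)."""

    def __init__(self, process_group=None):
        super().__init__()
        self.process_group = process_group  # stored at init, as in the commit

    def forward(self, logits: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
        # logits: (tokens, local_vocab) shard on this rank; labels: global ids.
        if self.process_group is None:
            return F.cross_entropy(logits, labels)
        rank = dist.get_rank(self.process_group)
        local_vocab = logits.shape[-1]
        vocab_start = rank * local_vocab  # assumes equal contiguous shards
        # Numerically stable log-sum-exp over the full (sharded) vocabulary.
        global_max = logits.max(dim=-1, keepdim=True).values
        dist.all_reduce(global_max, op=dist.ReduceOp.MAX, group=self.process_group)
        sum_exp = (logits - global_max).exp().sum(dim=-1)
        dist.all_reduce(sum_exp, group=self.process_group)
        # Only the rank holding the label contributes the target logit.
        in_shard = (labels >= vocab_start) & (labels < vocab_start + local_vocab)
        local_idx = (labels - vocab_start).clamp(0, local_vocab - 1)
        target_logit = torch.where(
            in_shard,
            logits.gather(-1, local_idx[:, None]).squeeze(-1),
            torch.zeros_like(sum_exp),
        )
        dist.all_reduce(target_logit, group=self.process_group)
        # Per-token loss: logsumexp(logits) - logit[label], averaged.
        return (sum_exp.log() + global_max.squeeze(-1) - target_logit).mean()
```

Taking the group at init mirrors the commit's intent: a module constructed once with its tensor-parallel group behaves like a drop-in loss, and no distributed plumbing leaks into the training step.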