kecinstone / 2024pra-vllm
Commits at 14f0b39cda15b948e0d5e7c87c4cef09e8db240a (branch/tag: 2024-pra-vllm)
Path: vllm / engine
22 Jun, 2023 · 1 commit
[Bugfix] Fix a bug in RequestOutput.finished (#202) · 14f0b39c · Woosuk Kwon, authored Jun 22, 2023
21 Jun, 2023 · 1 commit
fix-ray (#193) · 2e0d3143 · Zhuohan Li, authored Jun 22, 2023
20 Jun, 2023 · 1 commit
Use slow tokenizer for open llama models (#168) · 67d96c29 · Woosuk Kwon, authored Jun 19, 2023
18 Jun, 2023 · 1 commit
Reduce GPU memory utilization to make sure OOM doesn't happen (#153) · bf5f121c · Zhuohan Li, authored Jun 18, 2023
17 Jun, 2023 · 1 commit
Change the name to vLLM (#150) · 0b98ba15 · Woosuk Kwon, authored Jun 17, 2023