"torchvision/csrc/io/image/image.cpp" did not exist on "6e639d3e49371a509235201cb3f335b1c5cac0e3"
Request and model concurrency
This change adds support for handling multiple concurrent requests, as well as loading multiple models at once by spawning multiple runners. The defaults are currently 1 concurrent request per model and 1 loaded model at a time, but both can be adjusted via the OLLAMA_NUM_PARALLEL and OLLAMA_MAX_LOADED_MODELS environment variables.
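As a rough illustration only (not the actual code added in server/sched.go), a minimal Go sketch of how these two variables might be read, with both defaulting to 1 when unset or malformed:

```go
package main

import (
	"fmt"
	"os"
	"strconv"
)

// envInt reads a positive integer from the environment, falling back
// to def when the variable is unset or not a valid positive number.
func envInt(key string, def int) int {
	if v := os.Getenv(key); v != "" {
		if n, err := strconv.Atoi(v); err == nil && n > 0 {
			return n
		}
	}
	return def
}

func main() {
	// Defaults match the description above: 1 parallel request per model,
	// 1 loaded model at a time.
	numParallel := envInt("OLLAMA_NUM_PARALLEL", 1)
	maxLoadedModels := envInt("OLLAMA_MAX_LOADED_MODELS", 1)
	fmt.Printf("parallel=%d maxLoaded=%d\n", numParallel, maxLoadedModels)
}
```

In practice the variables would be set on the server process, e.g. `OLLAMA_NUM_PARALLEL=4 OLLAMA_MAX_LOADED_MODELS=2 ollama serve`.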
Showing 3 changed files (all new, mode 0 → 100644):
llm/memory.go
server/sched.go
server/sched_test.go