Daniel Hiltgen authored
This change adds support for multiple concurrent requests, as well as loading multiple models by spawning multiple runners. The defaults are currently 1 concurrent request per model and only 1 loaded model at a time, but these can be adjusted by setting the OLLAMA_NUM_PARALLEL and OLLAMA_MAX_LOADED_MODELS environment variables.
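A minimal sketch of adjusting these limits, assuming the server reads the variables from its environment at startup (the specific values here are illustrative, not recommended defaults):

```shell
# Allow up to 4 concurrent requests per loaded model
export OLLAMA_NUM_PARALLEL=4

# Allow up to 2 models to be resident in memory at once
export OLLAMA_MAX_LOADED_MODELS=2

# Restart the server so it picks up the new limits, e.g.:
# ollama serve
```

Raising OLLAMA_MAX_LOADED_MODELS trades memory for fewer model load/unload cycles, so it should be sized against available RAM/VRAM.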
34b9db5a