"tests/pipelines/cogview3/test_cogview3plus.py" did not exist on "960c149c777ea1584cd5584eac832ec9810b2632"
Request and model concurrency · 34b9db5a (authored by Daniel Hiltgen)
This change adds support for multiple concurrent requests, as well as loading multiple models by spawning multiple runners. By default, one request is served per model and only one model is loaded at a time; both limits can be adjusted through the OLLAMA_NUM_PARALLEL and OLLAMA_MAX_LOADED_MODELS environment variables, as sketched below.
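
The following is a minimal sketch, not Ollama's actual implementation, of how a server could read these two settings from the environment and fall back to the default of 1 when they are unset or invalid. The helper `envInt` is a hypothetical name introduced here for illustration.

```go
// Sketch only: illustrates reading OLLAMA_NUM_PARALLEL and
// OLLAMA_MAX_LOADED_MODELS with a default of 1; this is not the
// code from the commit above.
package main

import (
	"fmt"
	"os"
	"strconv"
)

// envInt returns the positive integer value of the named environment
// variable, or def when it is unset or not a valid positive integer.
func envInt(name string, def int) int {
	if v := os.Getenv(name); v != "" {
		if n, err := strconv.Atoi(v); err == nil && n > 0 {
			return n
		}
	}
	return def
}

func main() {
	numParallel := envInt("OLLAMA_NUM_PARALLEL", 1)    // concurrent requests per loaded model
	maxLoaded := envInt("OLLAMA_MAX_LOADED_MODELS", 1) // models kept loaded at once
	fmt.Printf("parallel=%d, max loaded models=%d\n", numParallel, maxLoaded)
}
```

In practice the variables would be set in the server's environment before it starts, for example `OLLAMA_NUM_PARALLEL=4 OLLAMA_MAX_LOADED_MODELS=2 ollama serve`.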