"examples/lm_finetuning/simple_lm_finetuning.py" did not exist on "51690699976ee47bfce0765521272c78261cdbda"
  • Daniel Hiltgen's avatar
    Request and model concurrency · 34b9db5a
    Daniel Hiltgen authored
    This change adds support for multiple concurrent requests, as well as
    loading multiple models by spawning multiple runners. The defaults are
    currently 1 concurrent request per model and only 1 loaded model at a
    time; these can be adjusted via the OLLAMA_NUM_PARALLEL and
    OLLAMA_MAX_LOADED_MODELS environment variables, respectively.
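The commit message describes the two knobs but not how they are consumed. The following is a minimal sketch, not the actual Ollama scheduler code, showing how a server could read OLLAMA_NUM_PARALLEL and OLLAMA_MAX_LOADED_MODELS from the environment and fall back to the defaults of 1 mentioned above; the `envInt` helper is hypothetical.

```go
package main

import (
	"log"
	"os"
	"strconv"
)

// envInt reads an integer environment variable, falling back to a default
// when the variable is unset or cannot be parsed as a positive integer.
// (Hypothetical helper for illustration only.)
func envInt(key string, fallback int) int {
	v := os.Getenv(key)
	if v == "" {
		return fallback
	}
	n, err := strconv.Atoi(v)
	if err != nil || n < 1 {
		log.Printf("invalid %s=%q, using default %d", key, v, fallback)
		return fallback
	}
	return n
}

func main() {
	// Defaults match the commit message: 1 concurrent request per model
	// and 1 loaded model at a time.
	numParallel := envInt("OLLAMA_NUM_PARALLEL", 1)
	maxLoaded := envInt("OLLAMA_MAX_LOADED_MODELS", 1)
	log.Printf("concurrency config: %d parallel request(s) per model, %d loaded model(s)",
		numParallel, maxLoaded)
}
```

In practice these variables are exported in the environment of the Ollama server process before it starts, so the scheduler picks them up when it spawns runners.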
amd_common.go 2.58 KB