Request and model concurrency
This change adds support for multiple concurrent requests, as well as loading multiple models by spawning multiple runners. By default, the server handles 1 concurrent request per model and keeps only 1 model loaded at a time; these limits can be raised via the OLLAMA_NUM_PARALLEL and OLLAMA_MAX_LOADED_MODELS environment variables.
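The description maps to two independent limits: how many models may be resident at once, and how many requests may run in parallel against each loaded model. The sketch below illustrates one way to express that gating in Go using buffered channels as counting semaphores. It is a minimal illustration of the idea, not the PR's actual scheduler; the `envInt` helper and the variable names are hypothetical.

```go
// Illustrative sketch: gate loaded models and per-model request
// concurrency with counting semaphores, in the spirit of
// OLLAMA_NUM_PARALLEL and OLLAMA_MAX_LOADED_MODELS.
// This is not the PR's real implementation.
package main

import (
	"fmt"
	"os"
	"strconv"
)

// envInt reads a positive integer environment variable, falling
// back to def when unset or invalid. (Hypothetical helper.)
func envInt(key string, def int) int {
	if v := os.Getenv(key); v != "" {
		if n, err := strconv.Atoi(v); err == nil && n > 0 {
			return n
		}
	}
	return def
}

func main() {
	numParallel := envInt("OLLAMA_NUM_PARALLEL", 1)    // requests per model
	maxLoaded := envInt("OLLAMA_MAX_LOADED_MODELS", 1) // resident models

	// Buffered channels act as counting semaphores: one slot per
	// permitted loaded model, and one slot per in-flight request
	// against a given model.
	loadedSlots := make(chan struct{}, maxLoaded)
	requestSlots := make(chan struct{}, numParallel)

	// Acquire a model slot before spawning a runner; this blocks
	// when the maximum number of models is already loaded.
	loadedSlots <- struct{}{}
	defer func() { <-loadedSlots }()

	// Each incoming request takes a per-model request slot.
	requestSlots <- struct{}{}
	defer func() { <-requestSlots }()

	fmt.Printf("serving with %d parallel request(s), %d loaded model(s)\n",
		numParallel, maxLoaded)
}
```

Under this scheme, launching with `OLLAMA_NUM_PARALLEL=4 OLLAMA_MAX_LOADED_MODELS=2` would allow up to two resident models, each serving up to four requests at once.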
Showing 3 changed files:
- gpu/cuda_common.go (added, mode 0 → 100644)
- gpu/gpu_info_nvml.c (deleted, mode 100644 → 0)
- gpu/gpu_info_nvml.h (deleted, mode 100644 → 0)