- 14 Jun, 2024 2 commits
Daniel Hiltgen authored
Daniel Hiltgen authored
Still not complete; needs some refinement to our prediction so it understands each discrete GPU's available space and can determine how many layers fit in each one. Since we can't split a single layer across multiple GPUs, we can't treat free space as one logical block.
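As a rough illustration of that constraint (a standalone sketch, not the actual prediction code; the layer and memory sizes are made up), counting how many whole layers fit on each GPU separately gives a smaller, more realistic number than dividing pooled free space by the layer size:

```go
package main

import "fmt"

// fitLayers is a hypothetical helper: given each GPU's free memory and a
// uniform per-layer size, it counts how many whole layers can be offloaded.
// Because a single layer cannot be split across GPUs, each GPU is checked
// on its own instead of pooling all free memory into one logical block.
func fitLayers(freePerGPU []uint64, layerSize uint64) int {
	total := 0
	for _, free := range freePerGPU {
		total += int(free / layerSize) // only whole layers fit on this GPU
	}
	return total
}

func main() {
	free := []uint64{3 << 30, 3 << 30} // two GPUs, 3 GiB free on each
	layer := uint64(2 << 30)           // assume 2 GiB per layer

	fmt.Println("per-GPU fit:", fitLayers(free, layer)) // 1 + 1 = 2 layers
	fmt.Println("pooled fit:", int((6<<30)/layer))      // 6 GiB / 2 GiB = 3 (overestimate)
}
```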
- 04 Jun, 2024 2 commits
Michael Yang authored
Michael Yang authored
- 24 May, 2024 1 commit
Patrick Devine authored
- 13 May, 2024 2 commits
Michael Yang authored
Michael Yang authored
- 10 May, 2024 1 commit
Jeffrey Morgan authored
* don't clamp ctx size in `PredictServerFit`
* minimum 4 context
* remove context warning
- 08 May, 2024 1 commit
Daniel Hiltgen authored
This records more GPU usage information for eventual UX inclusion.
- 07 May, 2024 1 commit
Michael Yang authored
- 05 May, 2024 1 commit
Daniel Hiltgen authored
This moves all the env var reading into one central module and logs the loaded config once at startup, which should help when troubleshooting user server logs.
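A minimal sketch of the pattern (not the actual module layout; the struct and defaults are illustrative, using the OLLAMA_HOST and OLLAMA_DEBUG variables as examples): every os.Getenv call lives in one loader, and the resolved values are logged exactly once when the server starts.

```go
package main

import (
	"log/slog"
	"os"
)

// serverConfig is an illustrative stand-in for a central config module:
// all environment variables are read here, in one place, instead of
// scattered os.Getenv calls throughout the codebase.
type serverConfig struct {
	Host  string
	Debug bool
}

func loadConfig() serverConfig {
	cfg := serverConfig{Host: "127.0.0.1:11434"} // assumed default bind address
	if h := os.Getenv("OLLAMA_HOST"); h != "" {
		cfg.Host = h
	}
	cfg.Debug = os.Getenv("OLLAMA_DEBUG") != ""
	return cfg
}

func main() {
	cfg := loadConfig()
	// Log the resolved settings once at startup so they show up in user
	// server logs when troubleshooting.
	slog.Info("server configuration", "host", cfg.Host, "debug", cfg.Debug)
}
```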
- 01 May, 2024 1 commit
Jeffrey Morgan authored
- 26 Apr, 2024 1 commit
Michael Yang authored
- 25 Apr, 2024 1 commit
Michael Yang authored
- 24 Apr, 2024 1 commit
Daniel Hiltgen authored
If we get our predictions wrong, this can be used to set a lower memory limit as a workaround. Recent multi-GPU refactoring accidentally removed it, so this adds it back.
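The commit doesn't name the variable, so OLLAMA_MAX_VRAM below is only an assumed placeholder; the idea is simply that a user-supplied ceiling overrides the detected free memory when it is smaller.

```go
package main

import (
	"fmt"
	"os"
	"strconv"
)

// availableVRAM caps the detected free GPU memory with a user-supplied
// override, useful when the automatic prediction overestimates.
// The variable name OLLAMA_MAX_VRAM is assumed here for illustration.
func availableVRAM(detected uint64) uint64 {
	v, err := strconv.ParseUint(os.Getenv("OLLAMA_MAX_VRAM"), 10, 64)
	if err == nil && v > 0 && v < detected {
		return v // the user asked for a lower ceiling than what was detected
	}
	return detected
}

func main() {
	fmt.Println("usable bytes:", availableVRAM(8<<30))
}
```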
- 23 Apr, 2024 1 commit
Daniel Hiltgen authored
This change adds support for multiple concurrent requests, as well as loading multiple models by spawning multiple runners. The defaults are currently 1 concurrent request per model and only 1 loaded model at a time, but these can be adjusted by setting OLLAMA_NUM_PARALLEL and OLLAMA_MAX_LOADED_MODELS.
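A sketch of how the two knobs could interact (not the real scheduler; the model name, helper, and data structures are illustrative): OLLAMA_MAX_LOADED_MODELS bounds how many runners may exist at once, and OLLAMA_NUM_PARALLEL sizes a per-runner semaphore for concurrent requests.

```go
package main

import (
	"fmt"
	"os"
	"strconv"
)

// envInt reads a positive integer from an environment variable, falling
// back to def when it is unset or invalid.
func envInt(key string, def int) int {
	if v, err := strconv.Atoi(os.Getenv(key)); err == nil && v > 0 {
		return v
	}
	return def
}

// runner stands in for one spawned model process; the buffered channel acts
// as a semaphore bounding how many requests it serves at the same time.
type runner struct {
	model string
	slots chan struct{}
}

func main() {
	numParallel := envInt("OLLAMA_NUM_PARALLEL", 1)    // default: 1 request per model
	maxLoaded := envInt("OLLAMA_MAX_LOADED_MODELS", 1) // default: 1 loaded model

	loaded := map[string]*runner{}
	load := func(model string) (*runner, error) {
		if r, ok := loaded[model]; ok {
			return r, nil
		}
		if len(loaded) >= maxLoaded {
			return nil, fmt.Errorf("max loaded models (%d) reached", maxLoaded)
		}
		r := &runner{model: model, slots: make(chan struct{}, numParallel)}
		loaded[model] = r
		return r, nil
	}

	if r, err := load("example-model"); err == nil {
		r.slots <- struct{}{} // acquire a request slot
		fmt.Println("serving one request on", r.model)
		<-r.slots // release the slot
	}
}
```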