- 22 Jul, 2024 2 commits
Michael Yang authored

Daniel Hiltgen authored
The OLLAMA_MAX_VRAM env var was a temporary workaround for OOM scenarios. With concurrency support it was no longer wired up, and a single simplistic value doesn't map to multi-GPU setups. Users can still set `num_gpu` to limit memory usage and avoid OOM if we get our predictions wrong.
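For reference, a minimal Go sketch of the `num_gpu` fallback mentioned above, assuming a local server on the default port 11434 and an already pulled model (the model name and the value 20 are illustrative):

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	// num_gpu caps how many layers are offloaded to the GPU, which bounds
	// VRAM use for this request. The value 20 is illustrative; tune it for
	// your hardware.
	body, _ := json.Marshal(map[string]any{
		"model":  "llama2",
		"prompt": "Why is the sky blue?",
		"stream": false,
		"options": map[string]any{
			"num_gpu": 20,
		},
	})

	resp, err := http.Post("http://localhost:11434/api/generate", "application/json", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var out struct {
		Response string `json:"response"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		panic(err)
	}
	fmt.Println(out.Response)
}
```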
- 15 Jul, 2024 1 commit
royjhan authored
* Initial Batch Embedding
* Revert "Initial Batch Embedding"
  This reverts commit c22d54895a280b54c727279d85a5fc94defb5a29.
* Initial Draft
* mock up notes
* api/embed draft
* add server function
* check normalization
* clean up
* normalization
* playing around with truncate stuff
* Truncation
* Truncation
* move normalization to go
* Integration Test Template
* Truncation Integration Tests
* Clean up
* use float32
* move normalize
* move normalize test
* refactoring
* integration float32
* input handling and handler testing
* Refactoring of legacy and new
* clear comments
* merge conflicts
* touches
* embedding type 64
* merge conflicts
* fix hanging on single string
* refactoring
* test values
* set context length
* clean up
* testing clean up
* testing clean up
* remove function closure
* Revert "remove function closure"
  This reverts commit 55d48c6ed17abe42e7a122e69d603ef0c1506787.
* remove function closure
* remove redundant error check
* clean up
* more clean up
* clean up
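A minimal sketch of the batch path this series adds, assuming the `/api/embed` endpoint accepts an array under `input` and returns float32 vectors under `embeddings`; the model name and inputs are illustrative:

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	// One request, several inputs: the batch shape this change introduces.
	body, _ := json.Marshal(map[string]any{
		"model": "all-minilm",
		"input": []string{
			"the quick brown fox",
			"jumps over the lazy dog",
		},
	})

	resp, err := http.Post("http://localhost:11434/api/embed", "application/json", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var out struct {
		Embeddings [][]float32 `json:"embeddings"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		panic(err)
	}
	fmt.Printf("got %d embeddings\n", len(out.Embeddings))
}
```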
- 09 Jul, 2024 1 commit
Daniel Hiltgen authored
On the smaller GPUs, the initial model load of llama2 took over 30s (the default timeout for the DoGenerate helper).
- 14 Jun, 2024 4 commits
Daniel Hiltgen authored

Daniel Hiltgen authored
Adjust timing on some tests so they don't time out on small/slow GPUs.

Daniel Hiltgen authored
Still not complete; our prediction needs refinement to understand each discrete GPU's available space so we can see how many layers fit in each one. Since we can't split one layer across multiple GPUs, we can't treat free space as one logical block.
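To illustrate the constraint in the note above, a small hypothetical sketch (not the project's actual scheduler) that fits whole layers GPU by GPU instead of against pooled free space; the layer size and VRAM figures are made up:

```go
package main

import "fmt"

// layersPerGPU counts how many whole layers fit on each discrete GPU.
// A single layer cannot be split across GPUs, so each GPU's free space
// must be considered on its own rather than pooled into one logical block.
func layersPerGPU(layerBytes uint64, freeVRAM []uint64) (perGPU []int, total int) {
	for _, free := range freeVRAM {
		n := int(free / layerBytes)
		perGPU = append(perGPU, n)
		total += n
	}
	return perGPU, total
}

func main() {
	// Illustrative numbers: 1 GiB layers, two GPUs with 1.5 GiB and 1.6 GiB free.
	per, total := layersPerGPU(1<<30, []uint64{1536 << 20, 1638 << 20})
	fmt.Println(per, total) // [1 1] 2, whereas pooling ~3.1 GiB would wrongly suggest 3
}
```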
Daniel Hiltgen authored
This worked remotely but wound up trying to spawn multiple servers locally, which doesn't work.
- 16 May, 2024 1 commit
Daniel Hiltgen authored
For a reliable result, this test needs to adjust the queue size down from our default setting, so it must skip in remote test execution mode.
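For illustration only, a hypothetical shape for that skip logic; both environment variable names below are invented and may not match the real test:

```go
package integration

import (
	"os"
	"testing"
)

func TestQueueOverflow(t *testing.T) {
	// Hypothetical: when targeting an existing/remote server we cannot
	// shrink its request queue, so the test must be skipped.
	if os.Getenv("TEST_EXISTING_SERVER") != "" {
		t.Skip("queue size cannot be adjusted on an existing server")
	}

	// Hypothetical: lower the queue limit before the local test server
	// starts, so overflow behaviour can be triggered quickly and reliably.
	t.Setenv("MAX_QUEUE", "10")

	// ... start a local server and flood it with requests here ...
}
```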
- 10 May, 2024 1 commit
Daniel Hiltgen authored
- 06 May, 2024 1 commit
Michael Yang authored
- 05 May, 2024 1 commit
Daniel Hiltgen authored
- 23 Apr, 2024 2 commits
Daniel Hiltgen authored

Daniel Hiltgen authored
This change adds support for multiple concurrent requests, as well as loading multiple models by spawning multiple runners. The defaults are currently 1 concurrent request per model and only 1 loaded model at a time, but these can be adjusted by setting OLLAMA_NUM_PARALLEL and OLLAMA_MAX_LOADED_MODELS.
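As a sketch only, one way to launch the server with these knobs raised above their defaults (the values 4 and 2 are arbitrary examples):

```go
package main

import (
	"os"
	"os/exec"
)

func main() {
	// Launch `ollama serve` with the concurrency settings described above.
	cmd := exec.Command("ollama", "serve")
	cmd.Env = append(os.Environ(),
		"OLLAMA_NUM_PARALLEL=4",      // concurrent requests per loaded model
		"OLLAMA_MAX_LOADED_MODELS=2", // models kept loaded at once
	)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}
```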
- 04 Apr, 2024 1 commit
Daniel Hiltgen authored
Confirmed this fails on 0.1.30 with the known regression but passes on main.
- 03 Apr, 2024 1 commit
Jeffrey Morgan authored
- 01 Apr, 2024 1 commit
Daniel Hiltgen authored
Cleaner shutdown logic and a bit of response hardening.
- 26 Mar, 2024 1 commit
Patrick Devine authored
- 25 Mar, 2024 1 commit
Daniel Hiltgen authored
If images aren't present, pull them. Also fixes the expected responses.
- 23 Mar, 2024 1 commit
Daniel Hiltgen authored
This uplevels the integration tests to run the server themselves, which also allows testing against an existing or remote server.
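A hedged sketch of how a test might target either a locally spawned or an existing/remote server; the `TEST_OLLAMA_HOST` variable and the `serverURL` helper are illustrative, not the suite's actual API:

```go
package integration

import (
	"net/http"
	"os"
	"testing"
	"time"
)

// serverURL is an illustrative helper: if TEST_OLLAMA_HOST is set, tests run
// against that existing (possibly remote) server; otherwise the suite would
// spawn a local server and use its address.
func serverURL(t *testing.T) string {
	t.Helper()
	if host := os.Getenv("TEST_OLLAMA_HOST"); host != "" {
		return host
	}
	// In the real suite a local server would be started here.
	return "http://127.0.0.1:11434"
}

func TestServerReachable(t *testing.T) {
	client := &http.Client{Timeout: 10 * time.Second}
	resp, err := client.Get(serverURL(t) + "/api/tags")
	if err != nil {
		t.Fatalf("server not reachable: %v", err)
	}
	resp.Body.Close()
}
```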