- 12 Mar, 2024 1 commit
  - racerole authored
    Signed-off-by: racerole <jiangyifeng@outlook.com>
- 11 Mar, 2024 2 commits
  - Bruce MacDonald authored
  - Jeffrey Morgan authored
- 09 Mar, 2024 1 commit
  - Jeffrey Morgan authored
- 08 Mar, 2024 1 commit
  - Jeffrey Morgan authored
- 01 Mar, 2024 1 commit
  - Jeffrey Morgan authored
- 20 Feb, 2024 2 commits
  - Jeffrey Morgan authored
  - Taras Tsugrii authored
- 14 Feb, 2024 1 commit
  - Jeffrey Morgan authored
- 09 Feb, 2024 1 commit
  - Daniel Hiltgen authored
    Make sure that when a shutdown signal comes, we shut down quickly instead of waiting for a potentially long exchange to wrap up.
- 31 Jan, 2024 1 commit
  - Daniel Hiltgen authored
    This requires an upstream change to support graceful termination, carried as a patch.
- 22 Jan, 2024 1 commit
  - Daniel Hiltgen authored
    This wires up logging in llama.cpp to always go to stderr, and also turns up logging if OLLAMA_DEBUG is set.
- 21 Jan, 2024 1 commit
  - Daniel Hiltgen authored
    Detect potential error scenarios so we can fall back to CPU mode without hitting asserts.
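A toy Go sketch of the fallback idea described above: probe for a failure condition up front (the VRAM check and all names below are invented for illustration) and degrade to CPU mode instead of letting a lower layer assert.

```go
package main

import (
	"errors"
	"fmt"
)

var errInsufficientVRAM = errors.New("not enough VRAM for requested layers")

// tryGPU is a stand-in for GPU initialization that can fail.
func tryGPU(vramMiB, requiredMiB int) error {
	if vramMiB < requiredMiB {
		return errInsufficientVRAM
	}
	return nil
}

// pickBackend returns "gpu" when initialization succeeds, and falls back
// to "cpu" on any detected error rather than crashing.
func pickBackend(vramMiB, requiredMiB int) string {
	if err := tryGPU(vramMiB, requiredMiB); err != nil {
		fmt.Println("falling back to CPU:", err)
		return "cpu"
	}
	return "gpu"
}

func main() {
	fmt.Println(pickBackend(4096, 8192)) // hypothetical sizes: too little VRAM
}
```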
- 14 Jan, 2024 1 commit
  - Jeffrey Morgan authored
- 10 Jan, 2024 1 commit
  - Jeffrey Morgan authored
    * update submodule to `6efb8eb30e7025b168f3fda3ff83b9b386428ad6`
    * unblock condition variable in `update_slots` when closing server
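Unblocking a condition variable on close is a classic shutdown fix: a worker waiting for new work must be woken when the server closes, or shutdown hangs forever. The actual change is in llama.cpp's C++ server; a Go sketch of the same pattern, with hypothetical names, is:

```go
package main

import (
	"fmt"
	"sync"
)

// server models a worker loop waiting on a condition variable for work.
type server struct {
	mu      sync.Mutex
	cond    *sync.Cond
	pending int
	closed  bool
}

func newServer() *server {
	s := &server{}
	s.cond = sync.NewCond(&s.mu)
	return s
}

// updateSlots blocks until there is work or the server is closing.
// It returns false when the server has been closed.
func (s *server) updateSlots() bool {
	s.mu.Lock()
	defer s.mu.Unlock()
	for s.pending == 0 && !s.closed {
		s.cond.Wait()
	}
	if s.closed {
		return false
	}
	s.pending--
	return true
}

// Close marks the server closed and broadcasts so blocked waiters return;
// without the Broadcast, a goroutine parked in Wait would never wake up.
func (s *server) Close() {
	s.mu.Lock()
	s.closed = true
	s.mu.Unlock()
	s.cond.Broadcast()
}

func main() {
	s := newServer()
	done := make(chan bool)
	go func() { done <- s.updateSlots() }()
	s.Close()
	fmt.Println("worker exited, got work:", <-done)
}
```

Note that the wait loop re-checks `closed` each time it wakes, so the shutdown works whether `Close` runs before or after the worker starts waiting.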
- 07 Jan, 2024 1 commit
  - Jeffrey Morgan authored
- 04 Jan, 2024 1 commit
  - Daniel Hiltgen authored
- 02 Jan, 2024 2 commits
  - Daniel Hiltgen authored
    This one log line was triggering a single-line llama.log to be generated in the pwd of the server.
  - Daniel Hiltgen authored
    This changes the model for llama.cpp inclusion so we're not applying a patch, but instead have the C++ code directly in the ollama tree, which should make it easier to refine and update over time.