- 19 Jun, 2024 (1 commit)
Daniel Hiltgen authored

- 04 Jun, 2024 (1 commit)
Michael Yang authored

- 29 Apr, 2024 (1 commit)
Jeffrey Morgan authored
* app: restart server on failure
* fix linter
* address comments
* refactor log directory creation to be where logs are written
* check all log dir creation errors
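Below is a minimal Go sketch of the restart-on-failure idea in this commit, assuming a hypothetical launcher that runs `ollama serve` and creates the log directory at the point the log file is written; the paths and helper names are illustrative, not the app's actual code.

```go
package main

import (
	"log"
	"os"
	"os/exec"
	"path/filepath"
	"time"
)

// runServerOnce starts `ollama serve`, streams its output to a log file, and
// returns when the process exits. The log directory is created here, where the
// log is actually written, and any creation error is reported to the caller.
func runServerOnce(logDir string) error {
	if err := os.MkdirAll(logDir, 0o755); err != nil {
		return err
	}
	logFile, err := os.OpenFile(filepath.Join(logDir, "server.log"),
		os.O_APPEND|os.O_CREATE|os.O_WRONLY, 0o644)
	if err != nil {
		return err
	}
	defer logFile.Close()

	cmd := exec.Command("ollama", "serve")
	cmd.Stdout = logFile
	cmd.Stderr = logFile
	if err := cmd.Start(); err != nil {
		return err
	}
	return cmd.Wait() // non-nil if the server crashed or exited abnormally
}

func main() {
	logDir := filepath.Join(os.TempDir(), "ollama", "logs") // hypothetical location
	for {
		if err := runServerOnce(logDir); err != nil {
			log.Printf("server exited: %v; restarting", err)
		}
		time.Sleep(time.Second) // brief backoff before restarting
	}
}
```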

- 14 Apr, 2024 (1 commit)
Jeffrey Morgan authored
* app: gracefully shut down `ollama serve` on Windows
* fix linter errors
* bring back `HideWindow`
* remove creation flags
* restore `windows.CREATE_NEW_PROCESS_GROUP`
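A hedged, Windows-only sketch of the mechanism these items reference: the child is started in its own process group via `windows.CREATE_NEW_PROCESS_GROUP` (with `HideWindow` keeping the console hidden), so a Ctrl+Break event can later be sent to it for a graceful stop. The package and function names are hypothetical; only the `x/sys/windows` calls and the `SysProcAttr` fields are real.

```go
//go:build windows

package serveproc

import (
	"os/exec"
	"syscall"

	"golang.org/x/sys/windows"
)

// start launches `ollama serve` in its own process group with no visible
// console window, so console control events can target the child alone.
func start() (*exec.Cmd, error) {
	cmd := exec.Command("ollama", "serve")
	cmd.SysProcAttr = &syscall.SysProcAttr{
		CreationFlags: windows.CREATE_NEW_PROCESS_GROUP,
		HideWindow:    true,
	}
	if err := cmd.Start(); err != nil {
		return nil, err
	}
	return cmd, nil
}

// stopGracefully sends CTRL_BREAK_EVENT to the child's process group, giving
// the server a chance to clean up before any hard kill.
func stopGracefully(cmd *exec.Cmd) error {
	return windows.GenerateConsoleCtrlEvent(windows.CTRL_BREAK_EVENT, uint32(cmd.Process.Pid))
}
```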

- 01 Apr, 2024 (1 commit)
Daniel Hiltgen authored
This should resolve a number of memory leak and stability defects by allowing us to isolate llama.cpp in a separate process, shut it down when idle, and gracefully restart it if it has problems. This also serves as a first step toward running multiple copies to support multiple models concurrently.
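The mechanism described above, sketched in Go under stated assumptions: a hypothetical `Manager` that launches the llama.cpp runner as a separate process on demand, marks it stopped if it crashes so the next request restarts it, and reaps it after an idle period. The type, field, and method names are illustrative, not the actual ollama implementation.

```go
package runner

import (
	"os/exec"
	"sync"
	"time"
)

// Manager keeps the llama.cpp runner in its own process so crashes and memory
// leaks are isolated from the main server, and the process can be stopped when idle.
type Manager struct {
	mu       sync.Mutex
	cmd      *exec.Cmd
	running  bool
	lastUsed time.Time
	idleTTL  time.Duration
	binary   string   // path to the runner executable (illustrative)
	args     []string // model and server flags (illustrative)
}

// Acquire returns a live runner process, launching or relaunching it as needed.
func (m *Manager) Acquire() (*exec.Cmd, error) {
	m.mu.Lock()
	defer m.mu.Unlock()
	if !m.running {
		cmd := exec.Command(m.binary, m.args...)
		if err := cmd.Start(); err != nil {
			return nil, err
		}
		m.cmd = cmd
		m.running = true
		// Watch the process; if it exits or crashes, mark it stopped so the
		// next Acquire restarts it cleanly instead of reusing a dead handle.
		go func() {
			_ = cmd.Wait()
			m.mu.Lock()
			if m.cmd == cmd {
				m.running = false
			}
			m.mu.Unlock()
		}()
	}
	m.lastUsed = time.Now()
	return m.cmd, nil
}

// ReapIfIdle shuts the runner down once it has gone unused for idleTTL,
// releasing the memory llama.cpp holds between requests.
func (m *Manager) ReapIfIdle() {
	m.mu.Lock()
	defer m.mu.Unlock()
	if m.running && time.Since(m.lastUsed) > m.idleTTL {
		// In practice a graceful shutdown request would precede a hard kill.
		_ = m.cmd.Process.Kill()
		m.running = false
	}
}
```

Running one such manager per loaded model is one way the "first step" toward serving multiple models concurrently could build on this isolation.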

- 26 Mar, 2024 (1 commit)
Patrick Devine authored

- 16 Feb, 2024 (1 commit)
Daniel Hiltgen authored
Also fixes a few fit-and-finish items for a better developer experience.

- 15 Feb, 2024 (1 commit)
Daniel Hiltgen authored
This focuses on Windows first, but could be used for Mac and possibly Linux in the future.