- 04 Jan, 2024 8 commits

- Jeffrey Morgan authored
- Jeffrey Morgan authored
- Jeffrey Morgan authored
  * update cmake flags for intel macOS
  * remove `LLAMA_K_QUANTS`
  * put back `CMAKE_OSX_DEPLOYMENT_TARGET` and disable `LLAMA_F16C`
- Daniel Hiltgen authored
  Improve maintainability of Radeon card list
- Daniel Hiltgen authored
  Fail fast on WSL1 while allowing on WSL2
- Daniel Hiltgen authored
  Fix CPU-only builds
- Daniel Hiltgen authored
  Go embed doesn't like it when there are no matching files, so put a dummy placeholder in to allow building without any GPU support. If no "server" library is found, it is safely ignored at runtime.
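Since the `go:embed` constraint is enforced at compile time, a minimal sketch of the placeholder pattern looks like the following; the path pattern and package name are illustrative, not the repository's actual layout:

```go
package llm

import "embed"

// go:embed fails the build when a pattern matches no files, so a dummy
// placeholder file is kept under the embedded directory even when no
// GPU "server" libraries were built; a missing library is then simply
// absent at runtime. The path pattern below is hypothetical.
//
//go:embed llama.cpp/build/*/lib/*
var libEmbed embed.FS
```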
- Daniel Hiltgen authored
  This prevents users from accidentally installing on WSL1, with instructions guiding them to upgrade their WSL instance to version 2. Once on WSL2, users with an NVIDIA card can follow NVIDIA's instructions to set up GPU passthrough and run models on the GPU; this is not possible on WSL1.
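The log doesn't show how the WSL version is detected; one common heuristic, assuming `/proc/version` is inspected (the real installer may well do this in shell instead), is sketched here in Go:

```go
package main

import (
	"os"
	"strings"
)

// isWSL2 is a hedged sketch, not ollama's exact check: WSL2 kernels
// report a version string like "...microsoft-standard-WSL2...", while
// WSL1 kernels mention Microsoft without that suffix.
func isWSL2() bool {
	data, err := os.ReadFile("/proc/version")
	if err != nil {
		return false
	}
	v := strings.ToLower(string(data))
	return strings.Contains(v, "wsl2") || strings.Contains(v, "microsoft-standard")
}
```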

- 03 Jan, 2024 13 commits

- Daniel Hiltgen authored
  This moves the list of AMD GPUs into a single, easier-to-maintain list that should be simpler to update over time.
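A sketch of the single-list idea; the entries and names below are illustrative examples, not the actual supported set from this commit:

```go
package gpu

// supportedGFX keeps the Radeon support matrix in one place, so adding
// or removing a card is a one-line change. Example entries only.
var supportedGFX = []string{
	"gfx900",
	"gfx906",
	"gfx908",
	"gfx90a",
	"gfx1030",
	"gfx1100",
}

// radeonSupported reports whether a detected GFX version is on the list.
func radeonSupported(gfxVersion string) bool {
	for _, v := range supportedGFX {
		if v == gfxVersion {
			return true
		}
	}
	return false
}
```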
- Daniel Hiltgen authored
  Add ollama user to render group for Radeon support
- Daniel Hiltgen authored
  For the ROCm libraries to access the driver, we need to add the ollama user to the render group.
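On typical Linux systems this membership change amounts to something like `usermod -a -G render ollama`, though the exact command the installer runs may differ.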
- Jeffrey Morgan authored
- Bruce MacDonald authored
- Daniel Hiltgen authored
  Fix Windows system memory lookup
- Daniel Hiltgen authored
  This refines the gpu package error handling and fixes a bug in the system memory lookup on Windows.
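One common way to perform that lookup from Go is kernel32's `GlobalMemoryStatusEx`; the Windows-only sketch below illustrates the technique and is an assumption, not necessarily this commit's code:

```go
package gpu

import (
	"syscall"
	"unsafe"
)

// memoryStatusEx mirrors the Win32 MEMORYSTATUSEX layout.
type memoryStatusEx struct {
	length               uint32
	memoryLoad           uint32
	totalPhys            uint64
	availPhys            uint64
	totalPageFile        uint64
	availPageFile        uint64
	totalVirtual         uint64
	availVirtual         uint64
	availExtendedVirtual uint64
}

// systemMemoryBytes asks kernel32 for the total physical memory.
func systemMemoryBytes() (uint64, error) {
	proc := syscall.NewLazyDLL("kernel32.dll").NewProc("GlobalMemoryStatusEx")
	var m memoryStatusEx
	m.length = uint32(unsafe.Sizeof(m))
	ret, _, err := proc.Call(uintptr(unsafe.Pointer(&m)))
	if ret == 0 {
		return 0, err // err carries the GetLastError value
	}
	return m.totalPhys, nil
}
```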
- Daniel Hiltgen authored
  Refactor how we augment llama.cpp and refine the Windows native build
- Bruce MacDonald authored
- Cole Gillespie authored
- Jeffrey Morgan authored
- Patrick Devine authored
- Jeffrey Morgan authored

- 02 Jan, 2024 6 commits

- Daniel Hiltgen authored
  This one log line was causing a single-line llama.log to be generated in the server's working directory.
- Daniel Hiltgen authored
- Daniel Hiltgen authored
  Refactor where we store build outputs, and support a fully dynamic loading model on Windows so the base executable has no special dependencies and thus doesn't require a special PATH.
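Presumably "fully dynamic" means resolving the accelerated library at runtime instead of linking against it; a minimal Windows-flavored sketch, with a hypothetical library path:

```go
package main

import (
	"log"
	"syscall"
)

// loadServerLib tries to load an optional accelerated "server" library
// at runtime. If it's missing, the caller falls back to the CPU path,
// so the base executable carries no hard dependency and needs no
// special PATH. The DLL path passed in is hypothetical.
func loadServerLib(path string) bool {
	lib := syscall.NewLazyDLL(path)
	if err := lib.Load(); err != nil {
		log.Printf("no GPU server library at %s, using CPU: %v", path, err)
		return false
	}
	return true
}
```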
- Daniel Hiltgen authored
  This changes the model for llama.cpp inclusion so we're not applying a patch, but instead have the C++ code directly in the ollama tree, which should make it easier to refine and update over time.
- Karim ElGhandour authored
- Dane Madsen authored

- 27 Dec, 2023 3 commits

- Jeffrey Morgan authored
- Jeffrey Morgan authored
- Jeffrey Morgan authored

- 25 Dec, 2023 1 commit

- Icelain authored

- 24 Dec, 2023 1 commit

- Jeffrey Morgan authored

- 23 Dec, 2023 3 commits

- Jeffrey Morgan authored
- Daniel Hiltgen authored
  Guard integration tests with a tag
- Daniel Hiltgen authored
  This should help CI avoid running the integration test logic in a container, where it's not currently possible.
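In Go, guarding tests with a tag typically looks like the sketch below (the tag name `integration` is assumed here), so a plain `go test ./...` skips the file and CI opts in explicitly:

```go
//go:build integration

package server_test

import "testing"

// This test compiles and runs only when the tag is supplied:
//
//	go test -tags integration ./...
func TestIntegrationSmoke(t *testing.T) {
	t.Log("integration-only logic lives behind the tag")
}
```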

- 22 Dec, 2023 5 commits

- K0IN authored
- Bruce MacDonald authored
- Jeffrey Morgan authored
- Matt Williams authored
  Update the "where are models stored" question
- Matt Williams authored
  Signed-off-by: Matt Williams <m@technovangelist.com>