- 04 Jan, 2024 2 commits
Daniel Hiltgen authored
Daniel Hiltgen authored
Go embed doesn't like it when there are no matching files, so a dummy placeholder is included to allow building without any GPU support. If no "server" library is found, it is safely ignored at runtime.
- 03 Jan, 2024 1 commit
Bruce MacDonald authored
- 02 Jan, 2024 1 commit
Daniel Hiltgen authored
Refactor where build outputs are stored, and support a fully dynamic loading model on Windows so the base executable has no special dependencies and thus doesn't require a special PATH.
- 20 Dec, 2023 1 commit
Daniel Hiltgen authored
This switches the default llama.cpp to be CPU-based, and builds the GPU variants as dynamically loaded libraries which we can select at runtime. This also bumps the ROCm library to version 6, since 5.7 builds don't work on the latest ROCm library that just shipped.
- 19 Dec, 2023 3 commits
Daniel Hiltgen authored
Daniel Hiltgen authored
This allows the CPU-only builds to work on systems with Radeon cards.
Daniel Hiltgen authored