- 01 Apr, 2024 1 commit

Daniel Hiltgen authored
This should resolve a number of memory leaks and stability defects by allowing us to isolate llama.cpp in a separate process, shut it down when idle, and gracefully restart it if it has problems. This also serves as a first step toward running multiple copies to support multiple models concurrently.
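
A minimal sketch in Go of the supervision loop this describes: spawn the runner, restart it on crash, and stop it when idle. The runner path, idle timeout, and `lastUsed` callback are illustrative, not the actual ollama API.

```go
package main

import (
	"log"
	"os/exec"
	"time"
)

// superviseRunner spawns the llama.cpp runner as a child process,
// restarts it if it exits unexpectedly, and shuts it down after an
// idle period. All names here are illustrative.
func superviseRunner(runnerPath string, lastUsed func() time.Time, idle time.Duration) {
	for {
		cmd := exec.Command(runnerPath)
		if err := cmd.Start(); err != nil {
			log.Printf("failed to start runner: %v", err)
			return
		}
		done := make(chan error, 1)
		go func() { done <- cmd.Wait() }()
		ticker := time.NewTicker(time.Second)
	loop:
		for {
			select {
			case err := <-done:
				// Crash or unexpected exit: restart on the next iteration.
				log.Printf("runner exited: %v; restarting", err)
				break loop
			case <-ticker.C:
				if time.Since(lastUsed()) > idle {
					// Idle shutdown: kill the child and stop supervising.
					cmd.Process.Kill()
					<-done
					ticker.Stop()
					return
				}
			}
		}
		ticker.Stop()
		// Brief pause so a crash-looping runner doesn't respawn hot.
		time.Sleep(time.Second)
	}
}

func main() {
	// "yes" is a stand-in for the real runner binary.
	last := time.Now()
	superviseRunner("yes", func() time.Time { return last }, 5*time.Second)
}
```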

- 25 Mar, 2024 1 commit

Jeremy authored

- 12 Mar, 2024 1 commit

Daniel Hiltgen authored

- 07 Mar, 2024 1 commit

John authored
Signed-off-by: hishope <csqiye@126.com>

- 29 Feb, 2024 1 commit

Bernhard M. Wiedemann authored
See https://reproducible-builds.org/ for why this is good. This patch was done while working on reproducible builds for openSUSE.

- 02 Feb, 2024 1 commit

Daniel Hiltgen authored
Only apply patches if we have any, and make sure to clean up every file we patched at the end to leave the tree clean.

- 25 Jan, 2024 1 commit

Jeffrey Morgan authored
* Fix clearing kv cache between requests with the same prompt
* Fix PowerShell script

- 20 Jan, 2024 2 commits

Daniel Hiltgen authored

Jeffrey Morgan authored

- 19 Jan, 2024 1 commit

Jeffrey Morgan authored

- 17 Jan, 2024 1 commit

Daniel Hiltgen authored
This also refines the build process for the ext_server build.

- 13 Jan, 2024 2 commits

Jeffrey Morgan authored

Jeffrey Morgan authored

- 11 Jan, 2024 1 commit

Daniel Hiltgen authored
This reduces the built-in Linux version to not use any vector extensions, which enables the resulting builds to run under Rosetta on macOS in Docker. At runtime it then checks for the actual CPU vector extensions and loads the best CPU library available.
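
A minimal sketch of that runtime selection, assuming detection via golang.org/x/sys/cpu; the variant names are illustrative, not the actual library names.

```go
package main

import (
	"fmt"

	"golang.org/x/sys/cpu"
)

// bestCPUVariant probes the host CPU's vector extensions and returns
// the most capable build variant it can run; the baseline build uses
// no vector extensions at all, so it also works under Rosetta.
func bestCPUVariant() string {
	switch {
	case cpu.X86.HasAVX2:
		return "cpu_avx2"
	case cpu.X86.HasAVX:
		return "cpu_avx"
	default:
		return "cpu" // baseline, no vector extensions
	}
}

func main() {
	fmt.Println("selected CPU variant:", bestCPUVariant())
}
```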

- 05 Jan, 2024 1 commit

Bruce MacDonald authored

- 04 Jan, 2024 3 commits

Daniel Hiltgen authored
If the tree has a stale submodule, make sure we clean it up first.

Daniel Hiltgen authored

Jeffrey Morgan authored
* Update cmake flags for Intel macOS
* Remove `LLAMA_K_QUANTS`
* Put back `CMAKE_OSX_DEPLOYMENT_TARGET` and disable `LLAMA_F16C`

- 02 Jan, 2024 3 commits

Daniel Hiltgen authored

Daniel Hiltgen authored
Refactor where we store build outputs, and support a fully dynamic loading model on Windows so the base executable has no special dependencies and thus doesn't require a special PATH.
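
A sketch of the dynamic-loading idea on Windows, assuming golang.org/x/sys/windows; the DLL name is illustrative.

```go
//go:build windows

package main

import (
	"log"

	"golang.org/x/sys/windows"
)

func main() {
	// The base executable has no link-time dependency on the backend;
	// it resolves the library by name at runtime, so a load failure can
	// trigger a fallback to another variant instead of a startup error.
	dll, err := windows.LoadDLL("ext_server.dll") // illustrative name
	if err != nil {
		log.Fatalf("could not load backend: %v", err)
	}
	defer dll.Release()
	log.Printf("loaded %s", dll.Name)
}
```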

Daniel Hiltgen authored
This changes how llama.cpp is included: rather than applying a patch, the C++ code now lives directly in the ollama tree, which should make it easier to refine and update over time.

- 22 Dec, 2023 2 commits

Daniel Hiltgen authored
By default, builds will now produce non-debug and non-verbose binaries. To enable verbose logs in llama.cpp and debug symbols in the native code, set `CGO_CFLAGS=-g`.

Daniel Hiltgen authored

- 20 Dec, 2023 1 commit

Daniel Hiltgen authored
This switches the default llama.cpp build to be CPU-based and builds the GPU variants as dynamically loaded libraries that we can select at runtime. This also bumps the ROCm library to version 6, since 5.7 builds don't work on the latest ROCm library that just shipped.
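
A hedged sketch of how that runtime selection might look: probe for vendor runtime libraries and fall back to the CPU build. The library paths and variant names are illustrative.

```go
package main

import (
	"fmt"
	"os"
)

// pickVariant prefers a GPU build when its runtime libraries are
// present on the host, and falls back to the CPU build otherwise.
func pickVariant() string {
	if _, err := os.Stat("/usr/lib/x86_64-linux-gnu/libcuda.so.1"); err == nil {
		return "cuda"
	}
	if _, err := os.Stat("/opt/rocm/lib/librocblas.so"); err == nil {
		return "rocm_v6"
	}
	return "cpu"
}

func main() {
	fmt.Println("selected variant:", pickVariant())
}
```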

- 19 Dec, 2023 3 commits

Daniel Hiltgen authored
This changes the container-based Linux build to use an older Ubuntu distro to improve our compatibility matrix for older user machines.

Daniel Hiltgen authored

Daniel Hiltgen authored
Run server.cpp directly inside the Go runtime via cgo while retaining the LLM Go abstractions.
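
A minimal cgo sketch of that in-process shape, with a stub standing in for the compiled server.cpp; the `server_run` name is illustrative.

```go
package main

/*
#include <stdio.h>
#include <stdlib.h>

// Stand-in for an entry point exported by the embedded server.cpp;
// in the real tree this would be the compiled llama.cpp server.
static void server_run(const char *model) {
	printf("serving model: %s\n", model);
}
*/
import "C"
import "unsafe"

func main() {
	// Go calls straight into C code compiled into the same binary,
	// which is the shape of running server.cpp in-process via cgo.
	model := C.CString("llama-7b.gguf")
	defer C.free(unsafe.Pointer(model))
	C.server_run(model)
}
```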