- 11 Jan, 2024 1 commit
Daniel Hiltgen authored
This reduces the built-in Linux version to use no vector extensions, which enables the resulting builds to run under Rosetta on macOS in Docker. At runtime it then checks for the actual CPU vector extensions and loads the best CPU library available.
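The runtime selection described above can be sketched as follows. This is a minimal illustration, assuming Linux's `/proc/cpuinfo` as the feature source; the variant names (`cpu`, `cpu_avx`, `cpu_avx2`) and helper functions are illustrative, not the exact symbols used in the repository.

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// cpuFlags parses the first "flags" line of /proc/cpuinfo into a set.
func cpuFlags() map[string]bool {
	flags := map[string]bool{}
	data, err := os.ReadFile("/proc/cpuinfo")
	if err != nil {
		return flags // unreadable: fall back to the baseline build
	}
	for _, line := range strings.Split(string(data), "\n") {
		if strings.HasPrefix(line, "flags") {
			for _, f := range strings.Fields(line) {
				flags[f] = true
			}
			break
		}
	}
	return flags
}

// pickVariant chooses the most capable CPU library the host supports,
// falling back to a baseline with no vector extensions (the build that
// stays safe under Rosetta emulation).
func pickVariant(flags map[string]bool) string {
	switch {
	case flags["avx2"]:
		return "cpu_avx2"
	case flags["avx"]:
		return "cpu_avx"
	default:
		return "cpu"
	}
}

func main() {
	fmt.Println("loading LLM library variant:", pickVariant(cpuFlags()))
}
```

The key point is that the shipped binary carries the lowest-common-denominator build, and the faster vector-extension builds are only chosen after probing the actual host CPU.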
- 10 Jan, 2024 1 commit
Daniel Hiltgen authored
This can help speed up incremental builds when you're only testing one architecture, such as amd64. For example: `BUILD_ARCH=amd64 ./scripts/build_linux.sh && scp ./dist/ollama-linux-amd64 test-system:`
- 05 Jan, 2024 1 commit
Michael Yang authored
- 03 Jan, 2024 1 commit
Jeffrey Morgan authored
- 22 Dec, 2023 3 commits
Jeffrey Morgan authored
Daniel Hiltgen authored
By default, builds will now produce non-debug and non-verbose binaries. To enable verbose logs in llama.cpp and debug symbols in the native code, set `CGO_CFLAGS=-g`.
Daniel Hiltgen authored
- 19 Dec, 2023 3 commits
Daniel Hiltgen authored
If someone checks out the ollama repo and doesn't install the CUDA library, this will ensure they can still build a CPU-only version.
Daniel Hiltgen authored
Daniel Hiltgen authored
Run server.cpp directly inside the Go runtime via cgo, while retaining the LLM Go abstractions.
- 29 Sep, 2023 1 commit
Michael Yang authored
- 22 Sep, 2023 1 commit
Jeffrey Morgan authored
Add `Dockerfile.build` for building Linux binaries --------- Co-authored-by: Michael Yang <mxyng@pm.me>