- 08 Apr, 2025 1 commit
-
-
frob authored
* cleanup: remove OLLAMA_TMPDIR
* cleanup: ollama doesn't use temporary executables anymore
---------
Co-authored-by: Richard Lyons <frob@cloudstaff.com>
-
- 01 Apr, 2025 1 commit
-
-
Bruce MacDonald authored
With support for multimodal models becoming more varied and common, it is important for clients to be able to easily see what capabilities a model has. Returning these from the show endpoint allows clients to discover what a model can do.
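As an illustration, here is a minimal Go sketch of how a client might read those capabilities from the show endpoint. The "capabilities" field name and the example model name are assumptions for illustration, not quoted from this commit; check the current API documentation for the exact response shape.

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

// showResponse models only the field this example cares about; the
// "capabilities" field name is an assumption based on the commit description.
type showResponse struct {
	Capabilities []string `json:"capabilities"`
}

func main() {
	// Ask a local Ollama server to describe a model ("llava" is only an example).
	body, _ := json.Marshal(map[string]string{"model": "llava"})
	resp, err := http.Post("http://localhost:11434/api/show", "application/json", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var show showResponse
	if err := json.NewDecoder(resp.Body).Decode(&show); err != nil {
		panic(err)
	}
	// For a multimodal model this might print something like [completion vision].
	fmt.Println(show.Capabilities)
}
```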
-
- 27 Mar, 2025 1 commit
-
-
Parth Sareen authored
-
- 25 Mar, 2025 1 commit
-
-
copeland3300 authored
-
- 21 Mar, 2025 2 commits
-
-
Bruce MacDonald authored
-
Parth Sareen authored
-
- 13 Mar, 2025 1 commit
-
-
Bradley Erickson authored
-
- 10 Mar, 2025 1 commit
-
-
frob authored
-
- 07 Mar, 2025 1 commit
-
-
rekcäH nitraM authored
The problem with default.target is that it always points to the target that is currently started. So if you boot into single-user mode or rescue mode, Ollama still tries to start. I noticed this because it tried (and failed) to start repeatedly during a system update, where Ollama is definitely not wanted.
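For context, a hedged sketch of what the fix amounts to in the service unit's install section; the exact contents of the shipped ollama.service are not part of this log:

```ini
[Install]
# Before (problematic): the unit is pulled in by whatever target the system
# happens to boot, including rescue/single-user mode.
#WantedBy=default.target

# After: the unit is only started as part of a normal multi-user boot.
WantedBy=multi-user.target
```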
-
- 05 Mar, 2025 1 commit
-
-
Daniel Hiltgen authored
To stay under the 2G GitHub artifact limit, we're splitting ROCm out like we do on Linux.
-
- 04 Mar, 2025 1 commit
-
-
Blake Mizerany authored
Previously, developers without the synctest experiment enabled would see build failures when running tests in some server/internal/internal packages that use the synctest package. This change makes the transition to the package less painful by guarding use of synctest with build tags. synctest is enabled in CI, so if a new change breaks a synctest package it will break in CI, even if it does not break locally. The developer docs have been updated to help clear up any confusion about why package tests pass locally but fail in CI.
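A minimal sketch of the build-tag guard pattern described here, assuming a goexperiment.synctest constraint and an illustrative package and file name (neither is quoted from the commit). A test file like this only compiles and runs when the experiment is enabled, e.g. GOEXPERIMENT=synctest go test ./...:

```go
//go:build goexperiment.synctest

// The constraint above is an assumed guard; the actual tag used in the
// repository may differ. Package and test names are illustrative.
package example

import (
	"testing"
	"testing/synctest"
	"time"
)

func TestWithFakeClock(t *testing.T) {
	// synctest.Run executes the function in a "bubble" with a fake clock,
	// so time-based code can be exercised instantly and deterministically.
	synctest.Run(func() {
		start := time.Now()
		time.Sleep(5 * time.Second) // advances the fake clock without real waiting
		if time.Since(start) < 5*time.Second {
			t.Fatal("expected the fake clock to advance")
		}
	})
}
```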
-
- 27 Feb, 2025 1 commit
-
-
Daniel Hiltgen authored
* Windows ARM build
  Skip cmake, and note it's unused in the developer docs.
* Win: only check for ninja when we need it
  On Windows ARM, the CIM lookup fails, but we don't need ninja anyway.
-
- 25 Feb, 2025 2 commits
-
-
Chuanhui Liu authored
-
frob authored
Co-authored-by: Richard Lyons <frob@cloudstaff.com>
-
- 22 Feb, 2025 1 commit
-
-
Jeffrey Morgan authored
-
- 15 Feb, 2025 1 commit
-
-
James-William-Kincaid-III authored
-
- 13 Feb, 2025 1 commit
-
-
frob authored
Co-authored-by: Richard Lyons <frob@cloudstaff.com>
-
- 08 Feb, 2025 1 commit
-
-
Jeffrey Morgan authored
-
- 07 Feb, 2025 2 commits
-
-
Azis Alvriyanto authored
-
Leisure Linux authored
-
- 06 Feb, 2025 1 commit
-
-
Abhinav Pant authored
-
- 05 Feb, 2025 1 commit
-
-
Jeffrey Morgan authored
-
- 03 Feb, 2025 1 commit
-
-
Melroy van den Berg authored
-
- 02 Feb, 2025 1 commit
-
-
Davide Bertoni authored
-
- 29 Jan, 2025 2 commits
-
-
Parth Sareen authored
-
Michael Yang authored
* add build to .dockerignore
* test: only build one arch
* add build to .gitignore
* fix ccache path
* filter amdgpu targets
* only filter if autodetecting
* Don't clobber gpu list for default runner
  This ensures the GPU-specific environment variables are set properly
* explicitly set CXX compiler for HIP
* Update build_windows.ps1
  This isn't complete, but is close. Dependencies are missing, and it only builds the "default" preset.
* build: add ollama subdir
* add .git to .dockerignore
* docs: update development.md
* update build_darwin.sh
* remove unused scripts
* llm: add cwd and build/lib/ollama to library paths
* default DYLD_LIBRARY_PATH to LD_LIBRARY_PATH in runner on macOS
* add additional cmake output vars for msvc
* interim edits to make server detection logic work with dll directories like lib/ollama/cuda_v12
* remove unnecessary filepath.Dir, cleanup
* add hardware-specific directory to path
* use absolute server path
* build: linux arm
* cmake install targets
* remove unused files
* ml: visit each library path once
* build: skip cpu variants on arm
* build: install cpu targets
* build: fix workflow
* shorter names
* fix rocblas install
* docs: clean up development.md
* consistent build dir removal in development.md
* silence -Wimplicit-function-declaration build warnings in ggml-cpu
* update readme
* update development readme
* llm: update library lookup logic now that there is one runner (#8587)
* tweak development.md
* update docs
* add windows cuda/rocm tests
---------
Co-authored-by: jmorganca <jmorganca@gmail.com>
Co-authored-by: Daniel Hiltgen <daniel@ollama.com>
-
- 23 Jan, 2025 1 commit
-
-
Daniel Jalkut authored
-
- 21 Jan, 2025 1 commit
-
-
frob authored
-
- 20 Jan, 2025 1 commit
-
-
EndoTheDev authored
-
- 15 Jan, 2025 1 commit
-
-
Gloryjaw authored
-
- 14 Jan, 2025 1 commit
-
-
Patrick Devine authored
-
- 13 Jan, 2025 1 commit
-
-
Parth Sareen authored
-
- 29 Dec, 2024 1 commit
-
-
Anas Khan authored
Co-authored-by: Jeffrey Morgan <jmorganca@gmail.com>
-
- 27 Dec, 2024 1 commit
-
-
CIIDMike authored
-
- 20 Dec, 2024 1 commit
-
-
Patrick Devine authored
-
- 13 Dec, 2024 1 commit
-
-
Anuraag (Rag) Agrawal authored
* openai: return usage as final chunk for streams
---------
Co-authored-by: ParthSareen <parth.sareen@ollama.com>
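A hedged Go sketch of how a client might consume this: it streams an OpenAI-compatible chat completion and reads token usage from the final chunk. The model name is only an example, and the stream_options field follows the general OpenAI convention; it may or may not be required by this particular change.

```go
package main

import (
	"bufio"
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
	"strings"
)

// chunk models only the part of a streamed chat completion used here.
type chunk struct {
	Usage *struct {
		PromptTokens     int `json:"prompt_tokens"`
		CompletionTokens int `json:"completion_tokens"`
		TotalTokens      int `json:"total_tokens"`
	} `json:"usage"`
}

func main() {
	body, _ := json.Marshal(map[string]any{
		"model":          "llama3.2", // example model name
		"messages":       []map[string]string{{"role": "user", "content": "hi"}},
		"stream":         true,
		"stream_options": map[string]bool{"include_usage": true},
	})
	resp, err := http.Post("http://localhost:11434/v1/chat/completions", "application/json", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	scanner := bufio.NewScanner(resp.Body)
	for scanner.Scan() {
		line := strings.TrimPrefix(scanner.Text(), "data: ")
		if line == "" || line == "[DONE]" {
			continue
		}
		var c chunk
		if err := json.Unmarshal([]byte(line), &c); err != nil {
			continue
		}
		// Per this change, token usage arrives on the final chunk of the stream.
		if c.Usage != nil {
			fmt.Printf("prompt=%d completion=%d total=%d\n",
				c.Usage.PromptTokens, c.Usage.CompletionTokens, c.Usage.TotalTokens)
		}
	}
}
```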
-
- 11 Dec, 2024 1 commit
-
-
Jeffrey Morgan authored
-
- 10 Dec, 2024 3 commits
-
-
Stefan Weil authored
-
Daniel Hiltgen authored
The "F" was missing.
-
Daniel Hiltgen authored
* llama: wire up builtin runner
  This adds a new entrypoint into the ollama CLI to run the cgo-built runner. On Mac arm64, this will have GPU support, but on all other platforms it will be the lowest-common-denominator CPU build. After we fully transition to the new Go runners, more tech debt can be removed and we can stop building the "default" runner via make and rely on the builtin always.
* build: Make target improvements
  Add a few new targets and help for building locally. This also adjusts the runner lookup to favor local builds, then runners relative to the executable, and finally payloads.
* Support customized CPU flags for runners
  This implements a simplified custom CPU flags pattern for the runners. When built without overrides, the runner name contains the vector flag we check for (AVX) to ensure we don't try to run on unsupported systems and crash. If the user builds a customized set, we omit the naming scheme and don't check for compatibility. This avoids checking requirements at runtime, so that logic has been removed as well. This can be used to build GPU runners with no vector flags, or CPU/GPU runners with additional flags (e.g. AVX512) enabled.
* Use relative paths
  If the user checks out the repo in a path that contains spaces, make gets really confused, so use relative paths for everything in-repo to avoid breakage.
* Remove payloads from main binary
* install: clean up prior libraries
  This removes support for v0.3.6 and older versions (before the tar bundle) and ensures we clean up prior libraries before extracting the bundle(s). Without this change, runners and dependent libraries could leak when we update and lead to subtle runtime errors.
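As an aside, a hypothetical Go sketch of the "vector flag in the runner name" idea described above: a default build is named after the flag it requires and is skipped on hosts that lack it, while a customized build carries no suffix and gets no runtime check. Names and helpers here are illustrative, not the actual Ollama implementation.

```go
package main

import (
	"fmt"
	"strings"

	"golang.org/x/sys/cpu"
)

// compatible reports whether a runner build, identified by its (hypothetical)
// directory name, can run on this host. A customized build with no flag
// suffix is always accepted, mirroring the "omit the naming scheme" behaviour.
func compatible(runnerName string) bool {
	switch {
	case strings.HasSuffix(runnerName, "_avx512"):
		return cpu.X86.HasAVX512F
	case strings.HasSuffix(runnerName, "_avx2"):
		return cpu.X86.HasAVX2
	case strings.HasSuffix(runnerName, "_avx"):
		return cpu.X86.HasAVX
	default:
		return true // custom build: no runtime requirement check
	}
}

func main() {
	for _, name := range []string{"cpu", "cpu_avx", "cpu_avx2"} {
		fmt.Printf("%s compatible: %v\n", name, compatible(name))
	}
}
```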
-