- 11 Mar, 2025 1 commit
  - Michael Yang authored
- 10 Mar, 2025 1 commit
  - Jeffrey Morgan authored
- 07 Mar, 2025 1 commit
  - Jeffrey Morgan authored
- 04 Mar, 2025 1 commit
  - Michael Yang authored
    Output backend system info when initializing the backend. This ensures the information is always present without needing to be requested explicitly.
    - convert to structured logging
    - enumerate devices rather than backends since devices are ordered
    - track device indices grouped by device name
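A minimal sketch of this pattern using Go's log/slog, assuming a hypothetical device type and illustrative field names rather than Ollama's actual backend API:

```go
package main

import "log/slog"

// device is a hypothetical stand-in for a ggml backend device.
type device struct {
	name        string // e.g. "CUDA" or "CPU"
	description string
}

func main() {
	devices := []device{
		{"CUDA", "NVIDIA GeForce RTX 4090"},
		{"CUDA", "NVIDIA GeForce RTX 4090"},
		{"CPU", "AMD Ryzen 9 7950X"},
	}

	// Devices are enumerated in order; indices are tracked per device
	// name, so two GPUs with the same name get distinct indices.
	indexByName := make(map[string]int)
	for _, d := range devices {
		i := indexByName[d.name]
		indexByName[d.name] = i + 1
		// Structured logging keys here are illustrative.
		slog.Info("system info", "device", d.name, "index", i, "description", d.description)
	}
}
```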
- 03 Mar, 2025 1 commit
  - Michael Yang authored
    Expand backend loading error handling to catch more problems and log them instead of panicking.
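The upstream fix catches failures around ggml's C++ loader; the following Go sketch is only an analogue of the technique, converting a panic during loading into a logged error so one bad library cannot take down the process. The loader function and paths are hypothetical:

```go
package main

import (
	"fmt"
	"log/slog"
)

// loadBackend is a hypothetical loader that may panic on a bad library.
func loadBackend(path string) {
	if path == "" {
		panic("empty backend path")
	}
	// a real implementation would load the shared library here
}

// tryLoadBackend converts a panic during loading into an error.
func tryLoadBackend(path string) (err error) {
	defer func() {
		if r := recover(); r != nil {
			err = fmt.Errorf("backend load failed: %v", r)
		}
	}()
	loadBackend(path)
	return nil
}

func main() {
	for _, p := range []string{"libggml-cuda.so", ""} {
		if err := tryLoadBackend(p); err != nil {
			slog.Error("skipping backend", "path", p, "error", err)
			continue
		}
		slog.Info("loaded backend", "path", p)
	}
}
```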
- 28 Feb, 2025 2 commits
  - Michael Yang authored
  - Jeffrey Morgan authored
- 27 Feb, 2025 2 commits
  - Michael Yang authored
  - Jeffrey Morgan authored
- 25 Feb, 2025 1 commit
  - Jeffrey Morgan authored
- 24 Feb, 2025 1 commit
  - Jeffrey Morgan authored
- 20 Feb, 2025 1 commit
  - Michael Yang authored
- 19 Feb, 2025 1 commit
  - Jeffrey Morgan authored
- 18 Feb, 2025 1 commit
  - Michael Yang authored
    Sapphire Rapids has AMX support, but it ends up having a negative performance impact. Emerald Rapids also has AMX support with a positive performance impact; however, there's no reasonable way in ggml to differentiate between the two. The impact is small (~6%), so disable AMX entirely for simplicity.
- 14 Feb, 2025 2 commits
  - Jeffrey Morgan authored
  - Jesse Gross authored
    This provides integration with the new Ollama engine (58245413 next ollama runner (#7913)) and the rest of the Ollama infrastructure, such as the runner and the Ollama server. In addition, it builds out the KV cache infrastructure to support the requirements of how Ollama runs models, such as:
    - Parallel processing
    - Memory management for defragmentation and shifting
    - Multi-modal models
    Both old and new engines continue to be supported. By default, only the old engine is used. To enable the new engine, start the server with the OLLAMA_NEW_ENGINE environment variable set:
    OLLAMA_NEW_ENGINE=1 ./ollama serve
    Then start a model that is supported by the Ollama engine. This one is Llama 3.1 8b Q4_K_M:
    ./ollama run jessegross/llama3.1
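A minimal sketch of how such an environment-variable gate might look in Go; the function name and truthiness rules here are assumptions, not the server's actual selection logic:

```go
package main

import (
	"fmt"
	"os"
)

// useNewEngine is a hypothetical gate mirroring how a server might
// select an engine based on OLLAMA_NEW_ENGINE.
func useNewEngine() bool {
	v := os.Getenv("OLLAMA_NEW_ENGINE")
	return v != "" && v != "0" && v != "false"
}

func main() {
	if useNewEngine() {
		fmt.Println("starting with the new Ollama engine")
	} else {
		fmt.Println("starting with the old engine (default)")
	}
}
```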
- 11 Feb, 2025 1 commit
  - Michael Yang authored
    - wrap ggml_backend_load_best in try/catch
    - ignore non-ollama paths
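A minimal Go sketch of the path filter, assuming a hypothetical library directory; the real check lives in Ollama's backend loading code:

```go
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// isOllamaPath reports whether a candidate backend library lives under
// the Ollama library directory; libDir is an assumed location.
func isOllamaPath(libDir, candidate string) bool {
	rel, err := filepath.Rel(libDir, candidate)
	return err == nil && !strings.HasPrefix(rel, "..")
}

func main() {
	libDir := "/usr/lib/ollama"
	for _, p := range []string{
		"/usr/lib/ollama/cuda_v12/libggml-cuda.so",
		"/opt/other/libggml-vulkan.so",
	} {
		fmt.Println(p, "-> ollama path:", isOllamaPath(libDir, p))
	}
}
```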
- 10 Feb, 2025 1 commit
  - Jeffrey Morgan authored
- 07 Feb, 2025 1 commit
  - Azis Alvriyanto authored
- 06 Feb, 2025 2 commits
  - Diego Pereira authored
    Shield the code that processes the embedding result from subsequent calls, which may overwrite the same buffer while processing a second input when retrieving model embeddings.
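A minimal sketch of the defensive-copy technique in Go, with a hypothetical embeddings call standing in for the real retrieval code:

```go
package main

import "fmt"

// embedBuf simulates an internal buffer that each call reuses.
var embedBuf = make([]float32, 3)

// embeddings is a hypothetical call whose result aliases embedBuf,
// mirroring a result buffer that the next decode overwrites.
func embeddings(seed float32) []float32 {
	for i := range embedBuf {
		embedBuf[i] = seed + float32(i)
	}
	return embedBuf
}

func main() {
	first := embeddings(1)

	// Defensive copy: detach the result from the shared buffer
	// before a second input is processed.
	saved := make([]float32, len(first))
	copy(saved, first)

	_ = embeddings(100) // overwrites the shared buffer

	fmt.Println("aliased result:", first) // clobbered: [100 101 102]
	fmt.Println("copied result: ", saved) // intact:    [1 2 3]
}
```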
  - Michael Yang authored
    - chore: update gitattributes
    - chore: add build info source
- 05 Feb, 2025 1 commit
  - Jeffrey Morgan authored
- 31 Jan, 2025 1 commit
  - Michael Yang authored
    This reverts commit bea1f1fa.
- 30 Jan, 2025 1 commit
  - Michael Yang authored
- 29 Jan, 2025 1 commit
  - Michael Yang authored
    - add build to .dockerignore
    - test: only build one arch
    - add build to .gitignore
    - fix ccache path
    - filter amdgpu targets
    - only filter if autodetecting
    - Don't clobber gpu list for default runner: this ensures the GPU-specific environment variables are set properly
    - Update build_windows.ps1: this isn't complete, but is close. Dependencies are missing, and it only builds the "default" preset
    - explicitly set CXX compiler for HIP
    - build: add ollama subdir
    - add .git to .dockerignore
    - docs: update development.md
    - update build_darwin.sh
    - remove unused scripts
    - llm: add cwd and build/lib/ollama to library paths
    - default DYLD_LIBRARY_PATH to LD_LIBRARY_PATH in runner on macOS
    - add additional cmake output vars for msvc
    - interim edits to make server detection logic work with dll directories like lib/ollama/cuda_v12
    - remove unnecessary filepath.Dir, cleanup
    - add hardware-specific directory to path
    - use absolute server path
    - build: linux arm
    - cmake install targets
    - remove unused files
    - ml: visit each library path once
    - build: skip cpu variants on arm
    - build: install cpu targets
    - build: fix workflow
    - shorter names
    - fix rocblas install
    - docs: clean up development.md
    - consistent build dir removal in development.md
    - silence -Wimplicit-function-declaration build warnings in ggml-cpu
    - update readme
    - update development readme
    - llm: update library lookup logic now that there is one runner (#8587)
    - tweak development.md
    - update docs
    - add windows cuda/rocm tests
    Co-authored-by: jmorganca <jmorganca@gmail.com>
    Co-authored-by: Daniel Hiltgen <daniel@ollama.com>
- 14 Jan, 2025 1 commit
  - Jeffrey Morgan authored
- 08 Jan, 2025 1 commit
  - Jeffrey Morgan authored
- 04 Jan, 2025 1 commit
  - Ubaldo Porcheddu authored
- 19 Dec, 2024 1 commit
  - Parth Sareen authored
    This change adds a test to catch a regression in schema_to_grammar where the order of keys in the JSON schema is not preserved in the generated grammar, which is critical for step-by-step reasoning.
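A sketch of what such a regression test might look like in Go; schemaToGrammar here is an identity placeholder to keep the sketch self-contained, where a real test would call the actual converter:

```go
package grammar_test

import (
	"bytes"
	"testing"
)

// schemaToGrammar is a placeholder for the real schema-to-grammar
// converter; a real test would call into the llama.cpp wrapper.
func schemaToGrammar(schema []byte) []byte {
	return schema
}

func TestSchemaToGrammarPreservesKeyOrder(t *testing.T) {
	// "b" is declared before "a"; the grammar must keep that order.
	schema := []byte(`{"type":"object","properties":{"b":{"type":"integer"},"a":{"type":"integer"}}}`)
	g := schemaToGrammar(schema)

	bi := bytes.Index(g, []byte(`"b"`))
	ai := bytes.Index(g, []byte(`"a"`))
	if bi < 0 || ai < 0 || bi > ai {
		t.Fatalf("grammar does not preserve schema key order: %s", g)
	}
}
```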
- 17 Dec, 2024 1 commit
  - Jesse Gross authored
    Sometimes the KV cache requires defragmentation even without triggering the threshold heuristic. In this case, decoding will not be able to find a KV cache slot. This is particularly difficult for the caller to handle if it happens in between ubatches. To avoid this, we should immediately trigger a defrag.
    In addition, a heavily fragmented cache can require more than max_moves to defragment. Currently, we stop when we hit the limit, but this can leave a cache that still does not have adequate space even after defragmentation is triggered. Instead, we should do multiple batches of processing until everything is complete.
    Fixes #7949
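An illustrative Go sketch of the batching loop, not the actual llama.cpp defrag code; the cell counts and per-pass move budget are made up:

```go
package main

import "fmt"

// moveCells performs one bounded defrag pass and returns how many
// fragmented cells remain; purely illustrative.
func moveCells(fragmented, maxMoves int) int {
	moved := min(fragmented, maxMoves)
	fmt.Printf("defrag pass: moved %d cells\n", moved)
	return fragmented - moved
}

func main() {
	fragmented := 25    // cells that need to move
	const maxMoves = 10 // per-pass move budget (max_moves analogue)

	// Rather than stopping after one bounded pass, keep running
	// passes until the cache is fully defragmented.
	for fragmented > 0 {
		fragmented = moveCells(fragmented, maxMoves)
	}
	fmt.Println("cache defragmented; a KV slot can now be found")
}
```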
- 14 Dec, 2024 1 commit
  - Jeffrey Morgan authored
- 13 Dec, 2024 1 commit
  - Daniel Hiltgen authored
    This puts the low-level runner logging back on stderr for consistency with prior releases.
- 12 Dec, 2024 2 commits
  - Pascal Patry authored
  - Parth Sareen authored
- 11 Dec, 2024 2 commits
  - Blake Mizerany authored
    Previously we decoded and re-encoded JSON schemas during validation, which served no purpose since json.RawMessage already validates JSON syntax. Worse, the re-encoding lost field ordering from the original schema, which affects inference quality during step-by-step reasoning.
    While fixing this ordering issue by using json.RawMessage directly, testing revealed that schema_to_grammar (from llama.cpp) also fails to preserve field order during grammar generation. This appears to be the root cause of inference degradation.
    This change prevents us from mangling the user's original schema order, but we still need to address the ordering issue in schema_to_grammar. That will be a separate change.
    Updates #7978
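The ordering loss is easy to reproduce with Go's standard library; this runnable sketch shows a map round-trip reordering keys while json.RawMessage preserves them:

```go
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	schema := []byte(`{"b":1,"a":2,"c":3}`)

	// Decoding into a map and re-encoding reorders keys: Go marshals
	// map keys in sorted order, losing the author's field order.
	var m map[string]any
	if err := json.Unmarshal(schema, &m); err != nil {
		panic(err)
	}
	reencoded, _ := json.Marshal(m)
	fmt.Println(string(reencoded)) // {"a":2,"b":1,"c":3}

	// json.RawMessage still validates the syntax on Unmarshal but
	// keeps the original bytes, so field order is preserved.
	var raw json.RawMessage
	if err := json.Unmarshal(schema, &raw); err != nil {
		panic(err)
	}
	fmt.Println(string(raw)) // {"b":1,"a":2,"c":3}
}
```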
  - Jeffrey Morgan authored
- 10 Dec, 2024 2 commits
  - Daniel Hiltgen authored
    The final implementation of #7499 removed dynamic vector requirements in favor of a simpler filename-based model; this was leftover logic that is no longer needed.
  - Daniel Hiltgen authored
    - llama: wire up builtin runner. This adds a new entrypoint into the ollama CLI to run the cgo built runner. On Mac arm64, this will have GPU support, but on all other platforms it will be the lowest common denominator CPU build. After we fully transition to the new Go runners, more tech-debt can be removed and we can stop building the "default" runner via make and rely on the builtin always.
    - build: Make target improvements. Add a few new targets and help for building locally. This also adjusts the runner lookup to favor local builds, then runners relative to the executable, and finally payloads.
    - Support customized CPU flags for runners. This implements a simplified custom CPU flags pattern for the runners. When built without overrides, the runner name contains the vector flag we check for (AVX) to ensure we don't try to run on unsupported systems and crash. If the user builds a customized set, we omit the naming scheme and don't check for compatibility. This avoids checking requirements at runtime, so that logic has been removed as well. This can be used to build GPU runners with no vector flags, or CPU/GPU runners with additional flags (e.g. AVX512) enabled.
    - Use relative paths. If the user checks out the repo in a path that contains spaces, make gets really confused, so use relative paths for everything in-repo to avoid breakage.
    - Remove payloads from main binary
    - install: clean up prior libraries. This removes support for v0.3.6 and older versions (before the tar bundle) and ensures we clean up prior libraries before extracting the bundle(s). Without this change, runners and dependent libraries could leak when we update and lead to subtle runtime errors.
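A minimal Go sketch of the lookup priority described above (local builds, then runners relative to the executable, then payloads); the directory names are illustrative assumptions, not the actual lookup code:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// candidateRunnerDirs returns lookup locations in priority order:
// a local build tree, then a directory relative to the executable,
// then an extracted payload directory. Paths are illustrative.
func candidateRunnerDirs() []string {
	dirs := []string{filepath.Join("build", "lib", "ollama")} // local build
	if exe, err := os.Executable(); err == nil {
		dirs = append(dirs, filepath.Join(filepath.Dir(exe), "lib", "ollama"))
	}
	dirs = append(dirs, filepath.Join(os.TempDir(), "ollama", "runners")) // payloads
	return dirs
}

func main() {
	for i, d := range candidateRunnerDirs() {
		fmt.Printf("priority %d: %s\n", i+1, d)
	}
}
```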
- 05 Dec, 2024 1 commit
  - Parth Sareen authored
    Adds structured outputs to the chat endpoint.
    Co-authored-by: Michael Yang <mxyng@pm.me>
    Co-authored-by: Hieu Nguyen <hieunguyen1053@outlook.com>
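A hedged Go sketch of calling the endpoint with a JSON schema in the "format" field, assuming a local server on the default port and a model that supports structured outputs:

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"io"
	"net/http"
)

func main() {
	// The "format" field carries a JSON schema constraining the reply.
	body, _ := json.Marshal(map[string]any{
		"model": "llama3.1",
		"messages": []map[string]string{
			{"role": "user", "content": "Name a country and its capital."},
		},
		"format": json.RawMessage(`{
			"type": "object",
			"properties": {
				"country": {"type": "string"},
				"capital": {"type": "string"}
			},
			"required": ["country", "capital"]
		}`),
		"stream": false,
	})

	resp, err := http.Post("http://localhost:11434/api/chat", "application/json", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	out, _ := io.ReadAll(resp.Body)
	fmt.Println(string(out)) // reply content conforms to the schema
}
```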
- 03 Dec, 2024 1 commit
  - Sam authored