- 04 Aug, 2025 1 commit
Michael Yang authored
-
- 30 Jul, 2025 1 commit
Daniel Hiltgen authored
Support for bf16 was added in macOS 14+, and attempting to enable it on older versions causes runtime failures.
-
- 29 Jul, 2025 1 commit
Oliver Simons authored
* Enable CUDA Graphs for gemma3n. Similar to https://github.com/ggml-org/llama.cpp/pull/14741, though ollama has a slightly different model graph than llama.cpp, which requires different workaround checks.
* Remove residual check by reshaping differently in the gemma3n model. This should make the heuristics more robust.
-
- 17 Jul, 2025 1 commit
Michael Yang authored
-
- 11 Jul, 2025 2 commits
Jesse Gross authored
Reporting params.NumGPULayers can be misleading because it is the requested number of layers, not the actual number that is loaded. While they are often the same, there are cases where they can differ, such as when the GPU backend is missing.
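As a rough illustration of that distinction (all names are hypothetical, not Ollama's actual types), a runner could log the number of layers actually placed on the GPU rather than the requested count:

```go
package main

import "log/slog"

// layerPlacement records where a single model layer ended up; the type
// is illustrative only.
type layerPlacement struct {
	index int
	onGPU bool
}

// reportLoadedLayers logs the number of layers that were actually
// offloaded, which may be lower than the requested count (for example,
// when the GPU backend is missing).
func reportLoadedLayers(requested int, placements []layerPlacement) {
	loaded := 0
	for _, p := range placements {
		if p.onGPU {
			loaded++
		}
	}
	slog.Info("offloaded layers", "requested", requested, "loaded", loaded)
}

func main() {
	reportLoadedLayers(3, []layerPlacement{{0, true}, {1, true}, {2, false}})
}
```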
-
Jesse Gross authored
We're not currently using it, even in cases where we could. Disabling it improves generation performance by 10-30% with multiple GPUs.
-
- 09 Jul, 2025 1 commit
Jesse Gross authored
We don't get valid UUIDs for AMD GPUs on Windows, so the best option is to use the ordinal IDs. This brings us in line with what we currently do on the Ollama server - the only exception is AMD GPUs on Linux, which fall back to using ordinal IDs. The GGML implementation has no such fallback, but the situation doesn't appear to occur for any of the GPUs that we support. It's also possible that there are collisions between ordinal IDs for different libraries - however, the only places where we use them are AMD on Windows and Metal on Mac, which can never occur on the same system.
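A minimal sketch of that preference, using invented names rather than the real GGML/Ollama code: prefer the UUID when it is valid and fall back to the ordinal index otherwise.

```go
package main

import "fmt"

// deviceID returns an identifier for a GPU, preferring a vendor UUID and
// falling back to the ordinal index when no valid UUID is available
// (e.g. AMD GPUs on Windows). Purely illustrative.
func deviceID(uuid string, ordinal int) string {
	if uuid != "" {
		return uuid
	}
	return fmt.Sprintf("%d", ordinal)
}

func main() {
	fmt.Println(deviceID("GPU-8f6c8d2a", 0)) // UUID available
	fmt.Println(deviceID("", 1))             // fall back to the ordinal
}
```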
-
- 07 Jul, 2025 1 commit
Jesse Gross authored
The root cause was an unclean upgrade - this code is fine. This reverts commit 45f216a9.
-
- 02 Jul, 2025 1 commit
Daniel Hiltgen authored
This adds some extra logs to make the new engine a bit more consistent with the llama engine.
-
- 27 Jun, 2025 1 commit
Jesse Gross authored
This is causing segfaults, so disable it. Currently UUIDs are only used for debugging purposes, although they are planned to be used in additional ways in the future. Bug #11211
-
- 26 Jun, 2025 1 commit
Michael Yang authored
* update patches
* cherry pick metal mean kernel
* cherry pick cuda mean kernel
* gemma3n
-
- 23 Jun, 2025 1 commit
Daniel Hiltgen authored
* Re-remove cuda v11. Revert the revert - drop v11 support, requiring drivers newer than Feb 23. This reverts commit c6bcdc42.
* Simplify layout. With only one version of the GPU libraries, we can simplify things down somewhat. (Jetsons still require special handling.)
* distinct sbsa variant for linux arm64. This avoids accidentally trying to load the sbsa cuda libraries on a jetson system, which results in crashes.
* temporarily prevent rocm+cuda mixed loading
-
- 20 Jun, 2025 1 commit
Jesse Gross authored
We don't check the return status after computing the graph, which can silently lead to bad outputs if we try to keep going and a future computation succeeds. This appears to happen in certain cases on Apple M2 devices. Fixes #11070
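A hedged sketch of the fix described above (hypothetical names, not the actual GGML bindings): propagate a failed graph-compute status instead of ignoring it.

```go
package main

import (
	"errors"
	"fmt"
)

// computeStatus stands in for the status code a backend returns after
// evaluating a compute graph. Names are illustrative only.
type computeStatus int

const (
	statusSuccess computeStatus = iota
	statusFailed
)

var errGraphCompute = errors.New("graph computation failed")

// compute runs the graph and, crucially, checks the returned status
// instead of assuming success; ignoring it can silently produce bad
// outputs if later computations happen to succeed.
func compute(run func() computeStatus) error {
	if st := run(); st != statusSuccess {
		return fmt.Errorf("%w: status %d", errGraphCompute, st)
	}
	return nil
}

func main() {
	if err := compute(func() computeStatus { return statusFailed }); err != nil {
		fmt.Println("aborting generation:", err)
	}
}
```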
-
- 18 Jun, 2025 2 commits
Jeffrey Morgan authored
Reverts PR #11115. The original change was mistakenly reverted instead of #10822.
-
Jeffrey Morgan authored
This reverts commit aaa78180.
-
- 29 May, 2025 1 commit
Jesse Gross authored
This enables matching up devices and information reported by the backend with system management libraries such as nvml to get accurate free memory reporting.
-
- 24 May, 2025 1 commit
Jesse Gross authored
-
- 22 May, 2025 3 commits
Jesse Gross authored
FromFloatSlice and FromIntSlice return an error if the shape doesn't match the passed data or if memory can't be allocated. Since these are inputs, the memory being allocated is system memory rather than VRAM. In many cases, the caller can't really handle the error and panics. Empty and Zeros directly panic if they can't allocate memory. This makes things consistent by panicking in the first two cases as well, removing a fair amount of error handling code. This is also consistent with how Go typically handles these situations.
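A small sketch of the resulting convention, with illustrative names rather than the real ml package API: input constructors validate the shape and panic rather than returning an error.

```go
package main

import "fmt"

// tensor is a stand-in for a backend tensor; illustrative only.
type tensor struct {
	shape []int
	data  []float32
}

// fromFloatSlice builds an input tensor from host data. Since callers
// cannot meaningfully recover from a shape mismatch or a failed host
// allocation, it panics instead of returning an error, matching how
// zero/empty constructors behave.
func fromFloatSlice(data []float32, shape ...int) tensor {
	n := 1
	for _, d := range shape {
		n *= d
	}
	if n != len(data) {
		panic(fmt.Sprintf("invalid shape %v for %d elements", shape, len(data)))
	}
	return tensor{shape: shape, data: data}
}

func main() {
	t := fromFloatSlice([]float32{1, 2, 3, 4, 5, 6}, 2, 3)
	fmt.Println(t.shape)
}
```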
-
Jesse Gross authored
This provides granular information about the backend memory allocations required by the runner:
- Per backend
- Per layer
- Weights, cache and graph
- Allocation status
This can be used for debugging and validating memory estimates.
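A hypothetical sketch of what such a per-backend, per-layer report could look like as Go types (not Ollama's actual structures):

```go
package main

import "fmt"

// allocation pairs a requested size with whether it was satisfied.
type allocation struct {
	Size      uint64 // bytes requested
	Allocated bool   // whether the allocation succeeded
}

// layerMemory breaks a single layer down into weights and cache.
type layerMemory struct {
	Weights allocation
	Cache   allocation
}

// backendMemory aggregates the per-layer and graph allocations for one backend.
type backendMemory struct {
	Name   string
	Layers []layerMemory
	Graph  allocation
}

func main() {
	report := backendMemory{
		Name: "CUDA0",
		Layers: []layerMemory{
			{Weights: allocation{Size: 512 << 20, Allocated: true}},
		},
		Graph: allocation{Size: 64 << 20, Allocated: false},
	}
	fmt.Printf("%+v\n", report)
}
```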
-
Jesse Gross authored
GGML has a function to report the allocated size of a backend buffer. However, this returns 0 if we tried to allocate a buffer and it failed. For memory management purposes, it's important to know how much we were trying to allocate. This extends the API to report attempted sizes for all buffers and whether the allocation succeeded.
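A minimal illustration of the idea, assuming invented names: pair the attempted size with a success flag so that a failed buffer still reports how much was requested.

```go
package main

import "fmt"

// bufferSize pairs the size we attempted to allocate with whether the
// allocation succeeded, since a failed buffer would otherwise report 0.
type bufferSize struct {
	Attempted uint64
	OK        bool
}

// describe renders the attempted size regardless of success.
func describe(b bufferSize) string {
	if b.OK {
		return fmt.Sprintf("allocated %d bytes", b.Attempted)
	}
	return fmt.Sprintf("failed to allocate %d bytes", b.Attempted)
}

func main() {
	fmt.Println(describe(bufferSize{Attempted: 1 << 30, OK: false}))
}
```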
-
- 21 May, 2025 2 commits
Michael Yang authored
* feat: qwen3 dense
* feat: qwen3moe
* fix llama4 moe
-
Michael Yang authored
-
- 20 May, 2025 1 commit
Michael Yang authored
-
- 19 May, 2025 1 commit
Jesse Gross authored
Currently, when the backend is created, the tensors are loaded at the same time, which is a slow operation. This separates them into two steps:
- Create the backend, including enumerating tensors and allocating memory
- Load the tensor data
This allows more flexibility in managing model loading.
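A sketch of the two-step flow under assumed, illustrative names (not the real backend interface): construct first, load tensor data later.

```go
package main

import (
	"context"
	"fmt"
)

// backend sketches the two-phase flow: construction enumerates tensors
// and reserves memory, while Load streams the tensor data separately.
type backend struct {
	tensors []string
}

// newBackend enumerates tensors and reserves memory, but does not read
// any tensor data yet.
func newBackend(names []string) *backend {
	return &backend{tensors: names}
}

// Load reads the tensor data in a second, separate step so callers can
// decide when to pay for the slow part.
func (b *backend) Load(ctx context.Context) error {
	for _, name := range b.tensors {
		select {
		case <-ctx.Done():
			return ctx.Err()
		default:
			fmt.Println("loading", name)
		}
	}
	return nil
}

func main() {
	b := newBackend([]string{"blk.0.attn_q.weight", "blk.0.attn_k.weight"})
	_ = b.Load(context.Background())
}
```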
-
- 15 May, 2025 1 commit
Michael Yang authored
* panic if trying to pad 4d
* fix pixel values padding
-
- 14 May, 2025 2 commits
Bruce MacDonald authored
-
Michael Yang authored
-
- 12 May, 2025 2 commits
Jeffrey Morgan authored
-
Michael Yang authored
reduce prompt log to trace level
-
- 10 May, 2025 1 commit
Michael Yang authored
ml.Dump will preserve default values if not specified
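A generic Go sketch of the options pattern this implies, with hypothetical names: defaults are set up front and only explicitly passed options override them.

```go
package main

import "fmt"

// dumpOptions sketches optional settings that keep their defaults
// unless explicitly overridden.
type dumpOptions struct {
	Items     int
	Precision int
}

type dumpOption func(*dumpOptions)

func withPrecision(p int) dumpOption {
	return func(o *dumpOptions) { o.Precision = p }
}

// dump starts from the defaults and only applies the options the caller
// passed, so unspecified values are preserved.
func dump(opts ...dumpOption) dumpOptions {
	o := dumpOptions{Items: 3, Precision: 4} // defaults
	for _, opt := range opts {
		opt(&o)
	}
	return o
}

func main() {
	fmt.Println(dump())                 // {3 4}
	fmt.Println(dump(withPrecision(8))) // {3 8}: Items default preserved
}
```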
-
- 06 May, 2025 1 commit
Daniel Hiltgen authored
* Move quantization logic to GGML via new backend. This moves the model-aware logic to Go code and calls GGML's quantization code for model creation.
* Remove "add model quantizations". This is no longer needed now that quantization is implemented directly in Go+GGML code.
-
- 05 May, 2025 1 commit
Jeffrey Morgan authored
-
- 02 May, 2025 3 commits
Jesse Gross authored
Successfully completing processing with an errgroup cancels the associated context. However, we also have a goroutine that is checking for cancellation of the context. As a result, there is a race where the goroutine can pick up the cancellation and report an error, replacing the successful result. To avoid that, this replaces the goroutine with a cancellation check when we are reading files. This also has the advantage of stopping all reads relatively quickly on error and ensuring that there are no outstanding I/O operations when we return in this case. The downside is that if a file read blocks forever (for example, over the network) then cancellation of the context effectively won't be honored. However, this is also true for other smaller files we read, and the tensors are read in small chunks (128K), so it's consistent and better on balance overall.
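A minimal Go sketch of the pattern (not the actual Ollama code): check ctx.Err() between chunked reads instead of running a separate goroutine that selects on ctx.Done().

```go
package main

import (
	"bytes"
	"context"
	"fmt"
	"io"

	"golang.org/x/sync/errgroup"
)

// copyChunks reads in small chunks and checks for cancellation between
// reads, instead of a watcher goroutine that can race with successful
// completion when errgroup cancels the context on return.
func copyChunks(ctx context.Context, dst io.Writer, src io.Reader) error {
	buf := make([]byte, 128*1024)
	for {
		if err := ctx.Err(); err != nil {
			return err
		}
		n, err := src.Read(buf)
		if n > 0 {
			if _, werr := dst.Write(buf[:n]); werr != nil {
				return werr
			}
		}
		if err == io.EOF {
			return nil
		}
		if err != nil {
			return err
		}
	}
}

func main() {
	g, ctx := errgroup.WithContext(context.Background())
	var out bytes.Buffer
	g.Go(func() error {
		return copyChunks(ctx, &out, bytes.NewReader(make([]byte, 1<<20)))
	})
	// No goroutine selecting on ctx.Done() here, so a successful result
	// is not overwritten by the cancellation errgroup performs on return.
	fmt.Println("err:", g.Wait(), "bytes:", out.Len())
}
```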
-
Jesse Gross authored
Worst case graph preallocation was disabled by a27462b7 "ollamarunner: Temporarily disable worst case graph preallocation" since it caused crashes with large batches when not using the GPU. This backports upstream llama.cpp commit f057808 "ggml: Don't assert fail when tensor data changes (#13222)", which fixes the underlying bug and allows reverting the previous workaround.
-
Jeffrey Morgan authored
-
- 30 Apr, 2025 1 commit
Daniel Hiltgen authored
Users may have other incompatible GGML installs on their systems. This will prevent us from trying to load them from the path.
-
- 25 Apr, 2025 2 commits
Michael Yang authored
-
Jeffrey Morgan authored
-
- 18 Apr, 2025 1 commit
Michael Yang authored
-
- 17 Apr, 2025 1 commit
Jeffrey Morgan authored
-