- 05 Aug, 2025 2 commits
-
-
Jesse Gross authored
KV cache quantization has a dependency on the flash attention kernel. We currently cannot use flash attention with gpt-oss because it requires additional operations. The model definition does not call flash attention, so it works regardless of the setting, but the cache still picks up the quantization type. This updates the flash attention setting earlier in the loading flow so that all downstream settings are also set correctly.

Fixes: #11671
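A minimal Go sketch of the ordering described above: the flash attention setting is resolved before the KV cache type is derived, so a quantized cache is never selected when flash attention ends up disabled. The names (Options, resolveFlashAttention, resolveKVCacheType) are illustrative, not ollama's actual identifiers.

```go
// Sketch of the ordering fix: decide whether flash attention will
// actually be used *before* deriving the KV cache type, so a quantized
// cache is never selected without flash attention. Names are
// illustrative, not the real ollama identifiers.
package main

import "fmt"

type Options struct {
	FlashAttention bool
	KVCacheType    string // "f16", "q8_0", "q4_0", ...
}

// resolveFlashAttention clears the flash attention flag for models that
// need operations the kernel does not support (e.g. gpt-oss).
func resolveFlashAttention(opts *Options, modelSupportsFlashAttn bool) {
	if opts.FlashAttention && !modelSupportsFlashAttn {
		opts.FlashAttention = false
	}
}

// resolveKVCacheType only honors a quantized cache type when flash
// attention is enabled; otherwise it falls back to f16.
func resolveKVCacheType(opts *Options) {
	if !opts.FlashAttention && opts.KVCacheType != "f16" {
		opts.KVCacheType = "f16"
	}
}

func main() {
	opts := Options{FlashAttention: true, KVCacheType: "q8_0"}
	resolveFlashAttention(&opts, false) // gpt-oss: not supported
	resolveKVCacheType(&opts)           // runs after, so it sees the final flag
	fmt.Printf("%+v\n", opts)           // {FlashAttention:false KVCacheType:f16}
}
```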
-
Michael Yang authored
* bf16
* tests
* gpt-oss
* enable gptoss for engine
* rough estimate
* convert to mxfp4
* handle safetensors U8
* clamp glu/linear
* update tokenizer
* MXFP4 support
  This implements the Open Compute Microscaling (MX) FP4 format as a tensor type with backend implementations focusing on mulmat and mulmatid on CPU, CUDA, and Metal.
* Unit tests for MXFP4 support
  This exercises various operations and shapes on both CPU and GPU (if detected on the system).
* cuda graph
* unit test adjustments
* cuda: optimize memory access
  Read 4 bytes at a time (8 elements) when performing mul_mat_vec_mxfp4.
* mac: fix crash on old macos versions
  cblas_sgemm is only supported on v13.3 and up, but bf16 is only supported on v14+, so we were falling back to ggml-blas and crashing on bf16 tensors. Checking for the function being null seems to be the simplest way to conditionally avoid registering the backend.
* server: Minimum context length for gptoss
  This model requires a minimum context length of 8192 to function effectively. Users can set higher values through all normal mechanisms, but lower values will be silently reset.
* ggml: Multiply by numParallel for gptoss sliding window
  When computing the graph size estimate, the context size is already multiplied by numParallel so estimates reflect that. However, since sliding window models use a smaller, fixed context size, they need to manually take numParallel into account.
* gpt-oss integration
  Includes the harmony parser, thinking levels, etc.
* fix sync
* fix tests
* fix lint

Co-authored-by: Daniel Hiltgen <daniel@ollama.com>
Co-authored-by: Jesse Gross <jesse@ollama.com>
Co-authored-by: Devon Rifkin <drifkin@drifkin.net>
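The gpt-oss commit above mentions that the server enforces a minimum context length of 8192, silently raising lower values. Below is a hedged sketch of that behavior, assuming a simple clamp at load time; the function name and logging are illustrative, not the real implementation.

```go
// Minimal sketch of the "minimum context length for gptoss" behavior:
// values below 8192 are silently raised, larger user-provided values are
// kept. Illustrative only, not the actual ollama code.
package main

import (
	"fmt"
	"log/slog"
)

const gptossMinContext = 8192

// clampContextLength returns the effective context length for a gpt-oss
// model given the user's requested numCtx.
func clampContextLength(isGPTOSS bool, numCtx int) int {
	if isGPTOSS && numCtx < gptossMinContext {
		slog.Debug("raising context length to model minimum",
			"requested", numCtx, "minimum", gptossMinContext)
		return gptossMinContext
	}
	return numCtx
}

func main() {
	fmt.Println(clampContextLength(true, 2048))  // 8192
	fmt.Println(clampContextLength(true, 16384)) // 16384
	fmt.Println(clampContextLength(false, 2048)) // 2048
}
```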
-
- 04 Aug, 2025 1 commit
-
-
Jesse Gross authored
There is a bug when using sliding window attention where we run out of KV cache slots, likely because not all of the entries are removed as they slide out of range. This adds additional logging when it occurs to help track down the source.

Bug #10127
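A rough sketch of the kind of diagnostic logging described above: when no free KV cache slot is found, log how many cells are still in use and how many of those have already slid out of the window. The types and fields are illustrative, not ollama's actual cache implementation.

```go
// Sketch: dump enough state on slot-allocation failure to see whether
// out-of-window entries were left behind. Illustrative only.
package main

import (
	"errors"
	"fmt"
	"log/slog"
)

type cacheCell struct {
	pos   int32
	inUse bool
}

type slidingWindowCache struct {
	window int32
	cells  []cacheCell
}

// findSlot looks for a free cell; on failure it logs how many cells are
// still marked in use and how many of those are already out of the window.
func (c *slidingWindowCache) findSlot(curPos int32) (int, error) {
	used, stale := 0, 0
	for i, cell := range c.cells {
		if !cell.inUse {
			return i, nil
		}
		used++
		if curPos-cell.pos >= c.window {
			stale++ // should have been evicted as it slid out of range
		}
	}

	slog.Error("no free KV cache slots",
		"cells", len(c.cells), "used", used, "stale", stale,
		"window", c.window, "pos", curPos)
	return -1, errors.New("could not find a kv cache slot")
}

func main() {
	c := &slidingWindowCache{window: 4, cells: make([]cacheCell, 2)}
	if i, err := c.findSlot(10); err == nil {
		fmt.Println("slot", i)
	}
}
```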
-
- 31 Jul, 2025 1 commit
-
-
Jesse Gross authored
Models that use sliding window attention can only resume a sequence from the cache if it falls within the saved windows. This works well if the next message picks up where the old one left off. However, it generally prevents a partial prefix match unless the entire conversation falls within the sliding window. This can be a problem with reasoning models, where the traces are supposed to be removed from future messages, forcing the entire history to be re-evaluated.

This change allows models to specify that a larger amount of the history be retained in memory, allowing partial resumption in more cases. It still respects the window that the model was trained on for token generation.
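A hedged sketch of the idea above, assuming the cache tracks two sizes: the attention window the model was trained with, and a larger retention size used only for eviction, so older tokens stay available for prefix matching. The struct and method names are illustrative.

```go
// Sketch: keep more history in the cache than the attention window so a
// later request with a shorter matching prefix can still resume instead
// of re-evaluating everything. Not ollama's actual cache code.
package main

import "fmt"

type swaCache struct {
	window int32 // attention window the model was trained with
	keep   int32 // how much history to retain for prefix matching (>= window)
}

// canEvict reports whether a cached token at position pos may be dropped
// once the sequence has advanced to curPos. Eviction is based on the
// (larger) retention size, while attention masking still uses window.
func (c *swaCache) canEvict(pos, curPos int32) bool {
	return curPos-pos >= c.keep
}

// visible reports whether a cached token is inside the trained attention
// window and therefore attended to during token generation.
func (c *swaCache) visible(pos, curPos int32) bool {
	return curPos-pos < c.window
}

func main() {
	c := swaCache{window: 4096, keep: 32768}
	fmt.Println(c.canEvict(1000, 8000)) // false: retained for resumption
	fmt.Println(c.visible(1000, 8000))  // false: outside attention window
}
```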
-
- 30 Jul, 2025 3 commits
-
-
Sajal Kulshreshtha authored
-
Daniel Hiltgen authored
This reverts commit 9d071e6089319b37acf62bb739e3430dcb2ac0c3.
-
Daniel Hiltgen authored
Support for bf16 was added in macOS v14+, and attempting to enable it on older versions causes runtime failures.
-
- 29 Jul, 2025 3 commits
-
-
Daniel Hiltgen authored
-
Oliver Simons authored
* Enable CUDA Graphs for gemma3n
  Similar to https://github.com/ggml-org/llama.cpp/pull/14741, though ollama has a slightly different model graph than llama.cpp, which requires different workaround checks.
* Remove residual check by reshaping differently in gemma3n model
  This should make the heuristics more robust.
-
Jesse Gross authored
When we context shift, we delete half the context and apply RoPE with an offset to the other half. We used to RoPE across the entire context in a single pass with a zero offset for the deleted section. With the change to shifting in batches, we can skip any batches where all of the offsets would be zero. This typically reduces the number of operations by half.
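An illustrative sketch of the optimization, building on the batched shifting described in the 25 Jul entry below: each batch-sized chunk of offsets is checked, and chunks that are entirely zero (the deleted half) are skipped rather than rotated. Not ollama's actual cache code.

```go
// Sketch: when applying the shift in batch-sized chunks, skip any chunk
// whose offsets are all zero. Illustrative only.
package main

import "fmt"

// applyShift walks the per-position offsets in chunks of batchSize and
// only issues a RoPE call for chunks that contain a non-zero offset.
func applyShift(offsets []int32, batchSize int, rope func(start, end int)) {
	for start := 0; start < len(offsets); start += batchSize {
		end := start + batchSize
		if end > len(offsets) {
			end = len(offsets)
		}

		allZero := true
		for _, o := range offsets[start:end] {
			if o != 0 {
				allZero = false
				break
			}
		}
		if allZero {
			continue // nothing to rotate in this chunk
		}
		rope(start, end)
	}
}

func main() {
	// First half deleted (offset 0), second half shifted back by 4.
	offsets := []int32{0, 0, 0, 0, -4, -4, -4, -4}
	applyShift(offsets, 2, func(start, end int) {
		fmt.Printf("RoPE over [%d, %d)\n", start, end)
	})
}
```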
-
- 28 Jul, 2025 1 commit
-
-
Yoshi authored
-
- 27 Jul, 2025 1 commit
-
-
Mayan EDMS authored
-
- 25 Jul, 2025 2 commits
-
-
Jesse Gross authored
Currently, when we need to do a shift on the cache, it is one RoPE operation on the entire size of the cache (per layer). In some cases, this can create a compute graph that is larger than the forward pass, since the forward pass works in batches. Because we don't consider shifting in our memory estimates, this can cause a crash if we run out of memory.

By limiting the size of the RoPE calls to batch-size chunks, we ensure that the shift will never exceed the size of the forward pass, since the forward pass also contains a RoPE of the same size. This does not have a significant impact on performance since RoPE is a math operation that is mostly proportional to the size of its inputs.

In theory, defrag could have the same issue since it also creates a compute graph outside of the forward pass; however, since it only performs copies, it does not require any working space.
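A minimal sketch of the chunked shift described above: instead of one RoPE over the whole cache, one RoPE is issued per batch-sized chunk, so the shift graph never exceeds what the forward pass already allocates. Names are illustrative, not ollama's implementation.

```go
// Sketch: split a shift over cacheLen positions into chunks of at most
// batchSize and invoke rope once per chunk. Illustrative only.
package main

import "fmt"

func shiftInChunks(cacheLen, batchSize int, rope func(start, end int)) {
	for start := 0; start < cacheLen; start += batchSize {
		end := start + batchSize
		if end > cacheLen {
			end = cacheLen
		}
		rope(start, end)
	}
}

func main() {
	// A 32768-token cache shifted with a 512-token batch size issues 64
	// small RoPE calls instead of a single 32768-wide one.
	calls := 0
	shiftInChunks(32768, 512, func(start, end int) { calls++ })
	fmt.Println("RoPE calls:", calls) // 64
}
```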
-
Ruyut authored
-
- 24 Jul, 2025 2 commits
-
-
Patrick Devine authored
-
Jeffrey Morgan authored
-
- 23 Jul, 2025 2 commits
-
-
minxinyi authored
-
Michael Yang authored
-
- 22 Jul, 2025 2 commits
-
-
Patrick Devine authored
Co-authored-by: Richard Lyons <frob@cloudstaff.com>
-
ycomiti authored
-
- 20 Jul, 2025 2 commits
-
-
Stefan Wärting authored
-
Jeffrey Morgan authored
Co-authored-by: frob <rick+github@frob.com.au>
-
- 19 Jul, 2025 1 commit
-
-
zmldndx authored
-
- 17 Jul, 2025 5 commits
-
-
Daniel Hiltgen authored
The macos-13 runner is x86, while macos-13-xlarge is arm64.
-
frob authored
-
frob authored
Co-authored-by: Richard Lyons <frob@cloudstaff.com>
-
Haiyue Wang authored
-
Michael Yang authored
-
- 16 Jul, 2025 3 commits
-
-
Parth Sareen authored
-
Bruce MacDonald authored
StatusError was unreachable: the client always checks for error messages in the response body first, and the server always includes error messages with HTTP error status codes.
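A sketch of the client-side ordering described above: the error message in the response body is checked first, so the status-only path becomes dead code. This illustrates the pattern only; it is not the actual ollama API client.

```go
// Sketch: turn a non-2xx response into an error, preferring the
// server-provided message over a bare status code. Illustrative only.
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

type apiError struct {
	Error string `json:"error"`
}

// checkResponse prefers the error message from the JSON body.
func checkResponse(status int, body []byte) error {
	if status >= 200 && status < 300 {
		return nil
	}

	var ae apiError
	if err := json.Unmarshal(body, &ae); err == nil && ae.Error != "" {
		return fmt.Errorf("%s", ae.Error) // server message is always present
	}

	// Fallback that, per the commit above, is effectively unreachable.
	return fmt.Errorf("unexpected status: %s", http.StatusText(status))
}

func main() {
	fmt.Println(checkResponse(404, []byte(`{"error":"model not found"}`)))
}
```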
-
Marcelo Fornet authored
-
- 11 Jul, 2025 4 commits
-
-
Jesse Gross authored
Reporting params.NumGPULayers can be misleading because it is the requested number of layers, not the actual number that is loaded. While they are often the same, there are cases where they might mismatch, such as if the GPU backend is missing.
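A small hedged sketch of the reporting change: log the number of layers the backend actually offloaded rather than the requested params.NumGPULayers. The loadResult type and field names are assumptions for illustration.

```go
// Sketch: report loaded GPU layers, not the requested setting.
// Illustrative only.
package main

import "log/slog"

type loadResult struct {
	RequestedGPULayers int // what the user / defaults asked for
	LoadedGPULayers    int // what the backend actually offloaded
}

func logLoad(r loadResult) {
	// Logging the loaded count avoids implying layers are on the GPU
	// when, e.g., the GPU backend failed to initialize.
	slog.Info("model loaded", "gpu_layers", r.LoadedGPULayers,
		"requested_gpu_layers", r.RequestedGPULayers)
}

func main() {
	logLoad(loadResult{RequestedGPULayers: 33, LoadedGPULayers: 0})
}
```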
-
Jesse Gross authored
We're not currently using it, even in cases where we could. Disabling it improves generation performance by 10-30% with multiple GPUs.
-
Daniel Hiltgen authored
* Only load supported models on new engine
  Verify the model is supported before trying to load.
* int: testcase for all library models
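A hedged sketch of the check described in the first bullet above: the model's architecture is compared against the set the new engine implements before attempting to load. The listed architectures and function name are examples, not the authoritative support list.

```go
// Sketch: verify the architecture is supported by the new engine before
// loading; otherwise fall back. Illustrative only.
package main

import "fmt"

var supportedArchitectures = map[string]bool{
	"llama":   true,
	"gemma3":  true,
	"gemma3n": true,
	"gptoss":  true,
}

// canLoadOnNewEngine reports whether the new engine has an implementation
// for the given architecture.
func canLoadOnNewEngine(arch string) bool {
	return supportedArchitectures[arch]
}

func main() {
	for _, arch := range []string{"llama", "mystery-arch"} {
		if canLoadOnNewEngine(arch) {
			fmt.Println(arch, "-> load on new engine")
		} else {
			fmt.Println(arch, "-> unsupported, use fallback runner")
		}
	}
}
```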
- 09 Jul, 2025 1 commit
-
-
Jesse Gross authored
We don't get valid UUIDs for AMD GPUs on Windows, so the best option is to use ordinal IDs. This brings us in line with what we currently do on the Ollama server; the only exception is AMD GPUs on Linux, which fall back to using ordinal IDs. The GGML implementation has no fallback, but this doesn't appear to occur for any of the GPUs that we support.

It's also possible for ordinal IDs from different libraries to collide; however, the only places where we use them are AMD on Windows and Metal on Mac, which can never occur on the same system.
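A hedged sketch of the ID selection described above: prefer a backend-reported UUID and fall back to an ordinal otherwise. The sketch prefixes the ordinal with the library name for clarity, although the commit notes that plain ordinals are acceptable because the affected backends never coexist on one system; all names here are illustrative.

```go
// Sketch: stable device IDs with an ordinal fallback for backends that
// don't report UUIDs (e.g. AMD on Windows, Metal). Illustrative only.
package main

import (
	"fmt"
	"strconv"
)

type deviceInfo struct {
	Library string // "CUDA", "ROCm", "Metal", ...
	Ordinal int
	UUID    string // empty when the backend doesn't provide one
}

// deviceID returns a stable identifier for the device.
func deviceID(d deviceInfo) string {
	if d.UUID != "" {
		return d.UUID
	}
	return d.Library + "-" + strconv.Itoa(d.Ordinal)
}

func main() {
	fmt.Println(deviceID(deviceInfo{Library: "CUDA", Ordinal: 0,
		UUID: "GPU-8f4a2c10-1234-5678-9abc-def012345678"}))
	fmt.Println(deviceID(deviceInfo{Library: "ROCm", Ordinal: 0})) // ROCm-0
}
```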
-
- 08 Jul, 2025 1 commit
-
-
Daniel Hiltgen authored
Also removes stale model dir instructions for Windows.
-