"vscode:/vscode.git/clone" did not exist on "ec23f635d2e09bb9ce3cc23b7be6322cef6dec7a"
- 30 Oct, 2025 (6 commits)
-
Michael Yang authored
* ml(ggml): mrope
* interleave mrope
-
Michael Yang authored
-
Michael Yang authored
This change fixes two bugs with `ollama rm`:
1. Before a model is removed, it should first be stopped; previously this only happened for the first argument and was skipped for all other models.
2. Models were unloaded indiscriminately; this errors for cloud models, so the unload step is now omitted for them.
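A minimal sketch of the corrected flow, assuming a hypothetical model store interface (not the actual ollama source): every named model is stopped before removal, and the stop/unload step is skipped for cloud models, where it would only error.

```go
package sketch

import "context"

// Hypothetical stand-ins for the real model store and runner controls.
type model struct {
	Name    string
	IsCloud bool
}

type store interface {
	Lookup(name string) (model, error)
	Stop(ctx context.Context, name string) error // stop a running instance
	Remove(name string) error                    // delete the on-disk model
}

// deleteModels sketches the corrected `ollama rm` loop.
func deleteModels(ctx context.Context, s store, names []string) error {
	for _, name := range names {
		m, err := s.Lookup(name)
		if err != nil {
			return err
		}
		// Cloud models aren't loaded locally, so skip the stop step for them.
		if !m.IsCloud {
			if err := s.Stop(ctx, name); err != nil {
				return err
			}
		}
		if err := s.Remove(name); err != nil {
			return err
		}
	}
	return nil
}
```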
-
Michael Yang authored
This change fixes images with an alpha channel by overlaying the image onto a white background.
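A minimal sketch of the general technique using Go's standard image/draw package (illustrative, not the actual ollama code): fill the destination with opaque white, then alpha-blend the source over it so transparent pixels come out white.

```go
package sketch

import (
	"image"
	"image/color"
	"image/draw"
)

// flattenAlpha composites an image that may have an alpha channel onto an
// opaque white background.
func flattenAlpha(src image.Image) *image.RGBA {
	bounds := src.Bounds()
	dst := image.NewRGBA(bounds)
	// Fill with opaque white first...
	draw.Draw(dst, bounds, image.NewUniform(color.White), image.Point{}, draw.Src)
	// ...then draw the source over it with alpha blending.
	draw.Draw(dst, bounds, src, bounds.Min, draw.Over)
	return dst
}
```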
-
Michael Yang authored
* mulmat
* permute
-
Athiban Sharon authored
Fixed broken docs links
-
- 29 Oct, 2025 (8 commits)
-
Grace authored
Eats extra whitespace at the beginning and end of content
-
Daniel Hiltgen authored
This should reduce zombie processes during integration runs.
-
Patrick Devine authored
-
Michael Yang authored
-
Jeffrey Morgan authored
-
Jeffrey Morgan authored
-
Jeffrey Morgan authored
-
Michael Yang authored
-
- 28 Oct, 2025 (12 commits)
-
Patrick Devine authored
-
Parth Sareen authored
-
Daniel Hiltgen authored
* Fix vulkan PCI ID and ID handling

  Intel GPUs may not report PCI IDs, which was leading to incorrect overlap detection. Switch to using the existing PCI IDs; AMD GPUs claim not to report PCI IDs but actually do, so try anyway, as this is required for ADLX to find the GPUs on Windows. Numeric IDs lead to scheduling problems, so this also switches Vulkan to use UUID-based IDs (see the sketch below). The GPU discovery patches have been squashed into a single patch to simplify future rebases.

* review comments
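A loose sketch of the ID-selection idea with hypothetical types (the real discovery code differs): prefer a stable UUID, fall back to a reported PCI ID, and only use the enumeration index as a last resort.

```go
package sketch

import "fmt"

// gpuInfo is a hypothetical device record for illustration.
type gpuInfo struct {
	UUID  string // stable across runs and enumeration order
	PCIID string // may be empty on some drivers
	Index int    // enumeration order; unstable, so avoid it for scheduling
}

// deviceID picks the most stable identifier available.
func deviceID(g gpuInfo) string {
	switch {
	case g.UUID != "":
		return g.UUID
	case g.PCIID != "":
		return g.PCIID
	default:
		return fmt.Sprintf("vulkan-%d", g.Index)
	}
}
```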
-
Patrick Devine authored
This reverts commit 5d347f6d.
-
Parth Sareen authored
-
Parth Sareen authored
-
Parth Sareen authored
This reverts commit 934dd9e1.
-
Parth Sareen authored
-
Michael Yang authored
-
nicole pardal authored
-
Devon Rifkin authored
create: inherit FROM model's renderer/parser
-
Michael Yang authored
-
- 27 Oct, 2025 (2 commits)
-
Devon Rifkin authored
On main, the `RENDERER` and `PARSER` fields from the `Modelfile` don't get propagated to a new model created with a `req.From` parameter. This is easily triggered via `ollama run qwen3-coder`, then running a save command like `/save qwen3-coder-custom`.

Added a regression test for this, and the create path now opens the config for the "from" model in order to use its renderer/parser as defaults for the new model. This fixes the CLI as well as API-based creates.

Fixes: https://github.com/ollama/ollama/issues/12792
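Roughly, the described fallback could look like the following (hypothetical types and a pluggable config loader; the real create path differs):

```go
package sketch

// Hypothetical request/config shapes for illustration only.
type CreateRequest struct {
	From, Renderer, Parser string
}

type modelConfig struct {
	Renderer, Parser string
}

// resolveRendererParser sketches the fix: when req.From names a base model
// and the new Modelfile doesn't set RENDERER/PARSER, fall back to the values
// stored in the base model's config.
func resolveRendererParser(req CreateRequest, loadConfig func(string) (modelConfig, error)) (string, string, error) {
	renderer, parser := req.Renderer, req.Parser
	if req.From != "" && (renderer == "" || parser == "") {
		base, err := loadConfig(req.From)
		if err != nil {
			return "", "", err
		}
		if renderer == "" {
			renderer = base.Renderer
		}
		if parser == "" {
			parser = base.Parser
		}
	}
	return renderer, parser, nil
}
```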
-
nicole pardal authored
Currently, checking the length of prompts for embeddings to ensure they fit in the context window (and possible truncation) occurs in two places - the Ollama server and runner. This can lead to inconsistencies in both the checks and reported number of tokens processed. Since we have to do this processing in the runner, this consolidates all of the logic there.
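A minimal sketch of the consolidated check, assuming the runner already has the tokenized prompt and the context size (names are illustrative, not the real runner API):

```go
package sketch

import "fmt"

// prepareEmbeddingPrompt enforces the context-window limit in one place:
// either truncate the token sequence or report an error, and let the caller
// count the tokens that were actually processed.
func prepareEmbeddingPrompt(tokens []int, numCtx int, truncate bool) ([]int, error) {
	if len(tokens) <= numCtx {
		return tokens, nil
	}
	if !truncate {
		return nil, fmt.Errorf("prompt length %d exceeds context window %d", len(tokens), numCtx)
	}
	return tokens[:numCtx], nil
}
```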
-
- 25 Oct, 2025 (1 commit)
-
Patrick Devine authored
-
- 23 Oct, 2025 (4 commits)
-
Jesse Gross authored
If we create a memory layout that should fit based on reported free VRAM but allocation still fails, we start applying a backoff. Previously this reduced free VRAM by an exponential percentage (1%, 2%, 4%...), but those points tend to be too dense at the beginning and too sparse at the end. Therefore, this switches to an incremental backoff (10%, 20%, 30%...).
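The two schedules side by side, as a quick sketch (each returns the fraction of reported free VRAM to give back on retry i; illustrative only):

```go
package sketch

import "math"

// exponentialBackoff: 1%, 2%, 4%, 8%, ... (dense early, sparse late).
func exponentialBackoff(i int) float64 { return 0.01 * math.Pow(2, float64(i)) }

// incrementalBackoff: 10%, 20%, 30%, ... (evenly spaced steps).
func incrementalBackoff(i int) float64 { return 0.10 * float64(i+1) }
```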
-
Vinh Nguyen authored
-
Daniel Hiltgen authored
* DRY out the runner lifecycle code

  Now that discovery uses the runners as well, this unifies the runner spawning code into a single place. This also unifies GPU discovery types with the newer ml.DeviceInfo.

* win: make incremental builds better

  Place build artifacts in discrete directories so incremental builds don't have to start fresh.

* Adjust sort order to consider iGPUs
* Handle CPU inference OOM scenarios
* review comments
-
Jesse Gross authored
We currently short-circuit generation of the cache mask and just generate an empty tensor of the correct size. However, in some cases, this can also skip a cast operation, which can result in the worst-case graph not actually being fully worst case. We don't actually need the fast path for mask generation, so it's better to just use the normal code path.
-
- 22 Oct, 2025 (4 commits)
-
Jesse Gross authored
Currently, we only record the time for the last batch when processing the prompt. This results in unrealistically high numbers for the old llama runner.

Before:
total duration: 31.273112939s
load duration: 4.97054657s
prompt eval count: 32768 token(s)
prompt eval duration: 235.137439ms
prompt eval rate: 139356.80 tokens/s
eval count: 1873 token(s)
eval duration: 18.173182374s
eval rate: 103.06 tokens/s

After:
total duration: 30.024798033s
load duration: 4.758588663s
prompt eval count: 32768 token(s)
prompt eval duration: 7.779621548s
prompt eval rate: 4212.03 tokens/s
eval count: 1769 token(s)
eval duration: 17.148014223s
eval rate: 103.16 tokens/s
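The gist of the fix, as a tiny sketch (hypothetical bookkeeping; the runner's real code differs): accumulate prompt-eval time across every batch instead of overwriting it with the last batch's duration.

```go
package sketch

import "time"

// timings tracks prompt processing across all batches.
type timings struct {
	PromptEvalDuration time.Duration
	PromptEvalCount    int
}

func (t *timings) processBatch(tokens []int, decode func([]int)) {
	start := time.Now()
	decode(tokens)
	// Accumulate (+=) rather than assign (=), so every batch is counted.
	t.PromptEvalDuration += time.Since(start)
	t.PromptEvalCount += len(tokens)
}
```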
-
frob authored
-
nicole pardal authored
-
Patrick Devine authored
-
- 20 Oct, 2025 (3 commits)
-
Jeffrey Morgan authored
-
Michael Yang authored
-
Jeffrey Morgan authored
-