- 18 Dec, 2025 3 commits
-
-
Jeffrey Morgan authored
-
Grace authored
-
- 16 Dec, 2025 1 commit
-
-
Bruce MacDonald authored
Refactored the ConfigV2 and RootFS types from server/images.go into a new types/model/config.go file under the model package, and updated all references to use model.ConfigV2 and model.RootFS. This allows these types to be used in other projects without pulling in the llama package and compiling its C code.
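A minimal sketch of what this enables, assuming the new package is importable as github.com/ollama/ollama/types/model: a separate project can decode an image config into model.ConfigV2 without touching the llama package or its cgo build. The exact field set of ConfigV2 is not shown here, so the example only round-trips the JSON.

```go
// Sketch: a separate project handling an Ollama image config blob as plain Go
// data, with no dependency on the llama package's cgo build. The import path
// is an assumption based on the commit description.
package main

import (
	"encoding/json"
	"fmt"
	"os"

	"github.com/ollama/ollama/types/model"
)

func main() {
	raw, err := os.ReadFile("config.json") // an image config blob pulled from a registry
	if err != nil {
		panic(err)
	}

	var cfg model.ConfigV2
	if err := json.Unmarshal(raw, &cfg); err != nil {
		panic(err)
	}

	// Round-trip the config to show it decodes and re-encodes cleanly.
	out, _ := json.MarshalIndent(cfg, "", "  ")
	fmt.Println(string(out))
}
```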
-
- 11 Dec, 2025 2 commits
-
-
Devon Rifkin authored
Only supporting the stateless part of the API. Doc updates to come once this is shipped. Closes: #9659
-
EasonLin authored
-
- 08 Dec, 2025 1 commit
-
-
nicole pardal authored
This PR consolidates all embedding prompt-length checking, truncation, and prompt token counting into the runner to ensure a single source of truth.
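A rough sketch of the single check the runner can now own: count the prompt tokens, truncate if allowed, and report the count that was actually processed. The function and error names below are illustrative, not the runner's real identifiers.

```go
// Placeholder names (truncateEmbedPrompt, ErrPromptTooLong): the point is that
// length checking, truncation, and token counting happen in one place.
package main

import (
	"errors"
	"fmt"
)

var ErrPromptTooLong = errors.New("prompt exceeds context length and truncation is disabled")

// truncateEmbedPrompt enforces the context window for an embedding prompt.
// tokens is the tokenized prompt; numCtx is the model's context length.
func truncateEmbedPrompt(tokens []int, numCtx int, truncate bool) ([]int, int, error) {
	if len(tokens) <= numCtx {
		return tokens, len(tokens), nil
	}
	if !truncate {
		return nil, 0, ErrPromptTooLong
	}
	// Keep the first numCtx tokens; the reported count is what was embedded.
	return tokens[:numCtx], numCtx, nil
}

func main() {
	tokens := make([]int, 10)
	kept, n, err := truncateEmbedPrompt(tokens, 8, true)
	if err != nil {
		panic(err)
	}
	fmt.Printf("embedded %d of %d tokens (kept %d)\n", n, len(tokens), len(kept))
}
```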
-
- 05 Dec, 2025 1 commit
-
-
Sos Pogosyan authored
fix(api): correct Content-Type header for /api/chat and /api/generate when using cloud models (#13279)
---------
Co-authored-by: Pogosyan Sos <sos_pogosyan@MacBook-Pro-Sos.local>
Co-authored-by: Patrick Devine <patrick@infrahq.com>
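The shape of this kind of fix, sketched with plain net/http rather than the server's actual router: pick the Content-Type from the request's stream setting before writing the (possibly proxied) body, instead of letting a hard-coded or upstream value leak through. The header values here are assumptions based on how streaming (newline-delimited JSON) and non-streaming (JSON) responses are typically served.

```go
// Stand-alone sketch, not Ollama's route code.
package main

import "net/http"

func chatHandler(w http.ResponseWriter, r *http.Request, streaming bool) {
	if streaming {
		// Streamed responses are newline-delimited JSON objects.
		w.Header().Set("Content-Type", "application/x-ndjson")
	} else {
		w.Header().Set("Content-Type", "application/json")
	}
	w.WriteHeader(http.StatusOK)
	// ... write the (possibly proxied) response body here ...
}

func main() {
	http.HandleFunc("/api/chat", func(w http.ResponseWriter, r *http.Request) {
		chatHandler(w, r, true)
	})
	http.ListenAndServe("127.0.0.1:8080", nil)
}
```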
-
- 20 Nov, 2025 1 commit
-
-
Grace authored
-
- 18 Nov, 2025 1 commit
-
-
Michael Yang authored
* migrate to golangci-lint v2
* copyloopvar
-
- 16 Nov, 2025 2 commits
-
-
omahs authored
-
pierwill authored
Co-authored-by: pierwill <pierwill@users.noreply.github.com>
-
- 13 Nov, 2025 1 commit
-
-
Parth Sareen authored
-
- 11 Nov, 2025 2 commits
-
-
Jesse Gross authored
Currently, both the old and new engines have code to calculate how much memory a model requires and to lay out its layers onto GPUs. This change reuses the new engine's layout code for the old engine as well, bringing them closer together. The old engine continues to use its current method of estimating required memory. This reduces maintenance effort and improves consistency, as new features only need to be implemented in one place. The newer code is also more accurate, especially with multiple GPUs.
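A simplified illustration of what a shared layout pass does, assuming a uniform per-layer size: fill each GPU's free memory in order and spill the remainder to CPU. The real code accounts for much more (KV cache, graph buffers, partial offload), so this only shows the overall shape.

```go
// Greedy layer placement sketch; numbers and structure are illustrative only.
package main

import "fmt"

type gpu struct {
	name string
	free uint64 // bytes available for model weights
}

// layoutLayers fills each GPU in order and returns the layer count per GPU
// plus how many layers are left for the CPU.
func layoutLayers(gpus []gpu, layerSize uint64, numLayers int) (perGPU []int, cpuLayers int) {
	perGPU = make([]int, len(gpus))
	remaining := numLayers
	for i, g := range gpus {
		fit := int(g.free / layerSize)
		if fit > remaining {
			fit = remaining
		}
		perGPU[i] = fit
		remaining -= fit
	}
	return perGPU, remaining
}

func main() {
	gpus := []gpu{{"gpu0", 8 << 30}, {"gpu1", 4 << 30}}
	perGPU, cpu := layoutLayers(gpus, 300<<20, 48) // ~300 MiB per layer, 48 layers
	fmt.Println("layers per GPU:", perGPU, "on CPU:", cpu)
}
```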
-
Baptiste Jamin authored
Adds logprobs support to Ollama's API, including Ollama's OpenAI-compatible API. By specifying the new 'logprobs' boolean parameter, Ollama will return the log probabilities for each generated token. An integer 'top_logprobs' parameter (up to 20) can also be specified; when set, the API additionally returns that many of the most likely tokens at each token position.
Co-authored-by: Baptiste Jamin <baptiste@crisp.chat>
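A hedged usage example of the new parameters on /api/generate; the 'logprobs' and 'top_logprobs' fields follow the description above, while the model name is arbitrary and the response schema is not shown here.

```go
// Minimal client-side example of requesting log probabilities.
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"io"
	"net/http"
)

func main() {
	body, _ := json.Marshal(map[string]any{
		"model":        "llama3.2", // any installed model
		"prompt":       "Why is the sky blue?",
		"stream":       false,
		"logprobs":     true, // return log probabilities for each generated token
		"top_logprobs": 5,    // also return the 5 most likely tokens per position (max 20)
	})

	resp, err := http.Post("http://localhost:11434/api/generate", "application/json", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	out, _ := io.ReadAll(resp.Body)
	fmt.Println(string(out))
}
```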
-
- 06 Nov, 2025 2 commits
-
-
breatn authored
-
Jeffrey Morgan authored
-
- 05 Nov, 2025 2 commits
-
-
Daniel Hiltgen authored
-
Grace authored
* routes/types: add tool call id
---------
Co-authored-by: ParthSareen <parth.sareen@ollama.com>
-
- 04 Nov, 2025 1 commit
-
-
Daniel Hiltgen authored
* app: add code for macOS and Windows apps under 'app'
* app: add readme
* app: windows and linux only for now
* ci: fix ui CI validation
---------
Co-authored-by: jmorganca <jmorganca@gmail.com>
-
- 29 Oct, 2025 1 commit
-
-
Michael Yang authored
-
- 28 Oct, 2025 1 commit
-
-
Patrick Devine authored
This reverts commit 5d347f6d.
-
- 27 Oct, 2025 2 commits
-
-
Devon Rifkin authored
On main, the `RENDERER` and `PARSER` fields from the `Modelfile` don't get propagated to a new model created with a `req.From` parameter. This is easily triggered via `ollama run qwen3-coder`, then running a save command like `/save qwen3-coder-custom`. Added a regression test for this, and the create path now opens the config for the "from" model in order to use its renderer/parser as a default for the new model. This fixes both the CLI and API-based creates. Fixes: https://github.com/ollama/ollama/issues/12792
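The gist of the fix, sketched with placeholder types: when creating from an existing model, fall back to that model's renderer and parser wherever the Modelfile does not set them. The names below (modelConfig, resolveRendererParser) are illustrative, not the server's real identifiers.

```go
// Fallback-to-base-config sketch.
package main

import "fmt"

type modelConfig struct {
	Renderer string
	Parser   string
}

// resolveRendererParser prefers explicit RENDERER/PARSER values from the
// Modelfile and falls back to the "from" model's config.
func resolveRendererParser(req modelConfig, base modelConfig) modelConfig {
	out := req
	if out.Renderer == "" {
		out.Renderer = base.Renderer
	}
	if out.Parser == "" {
		out.Parser = base.Parser
	}
	return out
}

func main() {
	base := modelConfig{Renderer: "qwen3-coder", Parser: "qwen3-coder"}
	created := resolveRendererParser(modelConfig{}, base) // e.g. /save qwen3-coder-custom
	fmt.Printf("%+v\n", created)
}
```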
-
nicole pardal authored
Currently, checking the length of prompts for embeddings to ensure they fit in the context window (and possibly truncating them) occurs in two places: the Ollama server and the runner. This can lead to inconsistencies in both the checks and the reported number of tokens processed. Since we have to do this processing in the runner anyway, this consolidates all of the logic there.
-
- 25 Oct, 2025 1 commit
-
-
Patrick Devine authored
-
- 23 Oct, 2025 1 commit
-
-
Daniel Hiltgen authored
* DRY out the runner lifecycle code
  Now that discovery uses the runners as well, this unifies the runner spawning code into a single place. This also unifies GPU discovery types with the newer ml.DeviceInfo.
* win: make incremental builds better
  Place build artifacts in discrete directories so incremental builds don't have to start fresh.
* Adjust sort order to consider iGPUs
* handle cpu inference oom scenarios
* review comments
-
- 22 Oct, 2025 1 commit
-
-
Patrick Devine authored
-
- 20 Oct, 2025 1 commit
-
-
Michael Yang authored
-
- 17 Oct, 2025 1 commit
-
-
Daniel Hiltgen authored
* test: harden scheduler tests
  This removes reschedDelay, which was stale code, and adds a new configurable timeout for waitForVRAMRecovery so tests can set the timeout very short, avoiding the scheduler getting stuck and hitting a test timeout.
* test: tune tests for partial loads
  Give stress tests more time when the model is split between CPU/GPU.
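The general pattern behind the configurable timeout, sketched with placeholder names: bound the recovery wait by a field that tests can shrink to milliseconds, so a stuck wait fails fast instead of tripping the suite's timeout.

```go
// Illustrative pattern only; field and method names are placeholders.
package main

import (
	"fmt"
	"time"
)

type scheduler struct {
	vramRecoveryTimeout time.Duration // tests can set this to a few milliseconds
}

// waitForVRAMRecovery polls until VRAM is back or the timeout elapses.
func (s *scheduler) waitForVRAMRecovery(recovered func() bool) bool {
	deadline := time.Now().Add(s.vramRecoveryTimeout)
	for time.Now().Before(deadline) {
		if recovered() {
			return true
		}
		time.Sleep(10 * time.Millisecond)
	}
	return false
}

func main() {
	s := &scheduler{vramRecoveryTimeout: 50 * time.Millisecond}
	fmt.Println("recovered:", s.waitForVRAMRecovery(func() bool { return false }))
}
```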
-
- 16 Oct, 2025 1 commit
-
-
Jeffrey Morgan authored
Adds a temporary global flag to renderers that causes them to always render images as [img]. In a follow-up change, we will consider making this the default, at which point this flag could be removed.
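A minimal sketch of the flag's effect, assuming a package-level toggle consulted by renderers; the actual flag name and how it is threaded through the renderers are not shown in the commit message.

```go
// Placeholder flag and function names.
package main

import "fmt"

// renderImagesAsTag stands in for the temporary global switch described above.
var renderImagesAsTag = true

// renderImage returns the placeholder token instead of model-specific image
// markup when the flag is set.
func renderImage(modelSpecific string) string {
	if renderImagesAsTag {
		return "[img]"
	}
	return modelSpecific
}

func main() {
	fmt.Println(renderImage("<image>"))
}
```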
-
- 14 Oct, 2025 1 commit
-
-
Devon Rifkin authored
-
- 13 Oct, 2025 1 commit
-
-
Grace authored
* working (other than tool calls being in the incorrect order) for tool calls and tools
* Tests work, other than image tags (tests do not go through the server) and tools (not in the correct order, but contents are the same)
* testing for qwen3vl parser - tool parser is working
* made changes to the JSON tool parser, wrapping the ToolCallFunction with a ToolCall object
* Working parser for thinking models - assumes a state of thinking, emits unambiguous content in thinking, does not emit tool calls while thinking
* changed the parser to start with collecting content
* thinking prefill
* add hasThinkingSupport parameter to parser
* qwen3-vl -> qwen3-vl-instruct for renderer/parser
* Add hasThinkingSupport=false to QwenVLParser
---------
Co-authored-by: Devon Rifkin <drifkin@drifkin.net>
-
- 11 Oct, 2025 2 commits
-
-
Jeffrey Morgan authored
-
Devon Rifkin authored
Made it so that when api/generate builds up a message array and generates the prompt, it now goes through the same function as `api/chat` for consistency. This is where we hook in the optional built-in renderers that bypass templates, which was missing for `api/generate` before this change. Closes: #12578
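A sketch of the consolidation: api/generate first turns its prompt (and optional system message) into a message list and then calls the same prompt-building function api/chat uses, so built-in renderers apply to both endpoints. chatPrompt below is a stand-in for that shared function, not its real signature.

```go
// Shared prompt-building path sketch with placeholder types.
package main

import "fmt"

type message struct {
	Role    string
	Content string
}

// chatPrompt stands in for the single shared path that applies either a
// template or a built-in renderer to a message list.
func chatPrompt(msgs []message) string {
	out := ""
	for _, m := range msgs {
		out += fmt.Sprintf("<%s>%s</%s>", m.Role, m.Content, m.Role)
	}
	return out
}

// generatePrompt builds messages from a generate-style request and reuses
// chatPrompt instead of rendering the prompt on its own.
func generatePrompt(system, prompt string) string {
	var msgs []message
	if system != "" {
		msgs = append(msgs, message{Role: "system", Content: system})
	}
	msgs = append(msgs, message{Role: "user", Content: prompt})
	return chatPrompt(msgs)
}

func main() {
	fmt.Println(generatePrompt("You are terse.", "Why is the sky blue?"))
}
```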
-
- 10 Oct, 2025 2 commits
-
-
Daniel Hiltgen authored
* implement nvml for linux
* Improve scheduler logging when VRAM doesn't recover
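Not Ollama's implementation, which wires NVML into its own discovery layer; this is just a sketch of the underlying per-device free-VRAM query on Linux, using NVIDIA's standalone Go bindings as a stand-in.

```go
// Sketch of querying free VRAM per GPU via NVML (github.com/NVIDIA/go-nvml).
package main

import (
	"fmt"
	"log"

	"github.com/NVIDIA/go-nvml/pkg/nvml"
)

func main() {
	if ret := nvml.Init(); ret != nvml.SUCCESS {
		log.Fatalf("nvml init: %s", nvml.ErrorString(ret))
	}
	defer nvml.Shutdown()

	count, ret := nvml.DeviceGetCount()
	if ret != nvml.SUCCESS {
		log.Fatalf("device count: %s", nvml.ErrorString(ret))
	}

	for i := 0; i < count; i++ {
		dev, ret := nvml.DeviceGetHandleByIndex(i)
		if ret != nvml.SUCCESS {
			continue
		}
		mem, ret := dev.GetMemoryInfo()
		if ret != nvml.SUCCESS {
			continue
		}
		fmt.Printf("gpu %d: %d MiB free of %d MiB\n", i, mem.Free>>20, mem.Total>>20)
	}
}
```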
-
Patrick Devine authored
-
- 09 Oct, 2025 4 commits
-
-
Daniel Hiltgen authored
* logs: quiet down context canceled on completion
  If the client closes the connection before Completion finishes, we were logging at error level, implying the runner crashed, which was misleading:
  time=2025-10-08T22:59:20.566-07:00 level=ERROR source=server.go:1490 msg="post predict" error="Post \"http://127.0.0.1:57736/completion\": context canceled"
* quiet down scheduler log error on expected case
  Since we don't hold the lock while performing memory load calculations, other runners can unload in parallel, so finding no runner to unload is a valid scenario which we shouldn't log at error level.
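The general pattern behind the first change: a canceled client context is an expected outcome, so it is logged below error level, while genuine failures keep the error log. This is a generic sketch, not the server's code.

```go
// Demote expected client cancellations from error to debug.
package main

import (
	"context"
	"errors"
	"log/slog"
)

func logPostPredictError(ctx context.Context, err error) {
	if err == nil {
		return
	}
	// Client went away mid-completion: expected, not a runner crash.
	if errors.Is(err, context.Canceled) || ctx.Err() != nil {
		slog.Debug("post predict canceled by client", "error", err)
		return
	}
	slog.Error("post predict", "error", err)
}

func main() {
	ctx, cancel := context.WithCancel(context.Background())
	cancel()
	logPostPredictError(ctx, context.Canceled)
}
```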
-
Parth Sareen authored
-
Jeffrey Morgan authored
This reverts commit 6a62b894.
-
Jeffrey Morgan authored
-