"docs_zh_CN/conf.py" did not exist on "e127ef28dbaa53b52529e83cc4de2452132f5aca"
- 11 Dec, 2025 1 commit
EasonLin authored
- 08 Dec, 2025 1 commit
nicole pardal authored
This PR consolidates all embedding prompt-length checking, truncation, and prompt token counting into the runner to ensure a single source of truth.
- 05 Dec, 2025 1 commit
Sos Pogosyan authored
fix(api): correct Content-Type header for /api/chat and /api/generate when using cloud models (#13279)
Co-authored-by: Pogosyan Sos <sos_pogosyan@MacBook-Pro-Sos.local>
Co-authored-by: Patrick Devine <patrick@infrahq.com>
- 20 Nov, 2025 1 commit
Grace authored
- 11 Nov, 2025 1 commit
Baptiste Jamin authored
Adds logprobs support to Ollama's API, including Ollama's OpenAI-compatible API. When the new 'logprobs' boolean parameter is specified, Ollama returns the log probabilities for each generated token. An integer 'top_logprobs' parameter can also be specified, up to a value of 20; when set, the API additionally returns that many of the most likely tokens at each token position.
Co-authored-by: Baptiste Jamin <baptiste@crisp.chat>
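For illustration, a minimal sketch of exercising these parameters against the generate endpoint; the JSON field names `logprobs` and `top_logprobs` are taken from the message above, and the model name is arbitrary:
```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Field names follow the commit message above; treat them as assumptions.
	body, _ := json.Marshal(map[string]any{
		"model":        "llama3.2",
		"prompt":       "Why is the sky blue?",
		"stream":       false,
		"logprobs":     true, // return the log probability of each generated token
		"top_logprobs": 5,    // also return the 5 most likely tokens per position (max 20)
	})

	resp, err := http.Post("http://localhost:11434/api/generate", "application/json", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	out, _ := io.ReadAll(resp.Body)
	fmt.Println(string(out)) // response should include per-token logprobs
}
```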
- 05 Nov, 2025 1 commit
Grace authored
routes/types: add tool call id
Co-authored-by: ParthSareen <parth.sareen@ollama.com>
- 29 Oct, 2025 1 commit
Michael Yang authored
- 28 Oct, 2025 1 commit
Patrick Devine authored
This reverts commit 5d347f6d.
- 27 Oct, 2025 1 commit
nicole pardal authored
Currently, checking the length of prompts for embeddings to ensure they fit in the context window (and possible truncation) occurs in two places - the Ollama server and runner. This can lead to inconsistencies in both the checks and reported number of tokens processed. Since we have to do this processing in the runner, this consolidates all of the logic there.
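A minimal sketch of what a single runner-side check might look like (hypothetical helper, not the actual runner code):
```go
package runner

import "fmt"

// truncatePrompt enforces the context window in one place and reports the
// token count actually processed, so server and runner cannot disagree.
func truncatePrompt(tokens []int32, numCtx int, allowTruncate bool) ([]int32, int, error) {
	if len(tokens) <= numCtx {
		return tokens, len(tokens), nil
	}
	if !allowTruncate {
		return nil, 0, fmt.Errorf("prompt length %d exceeds context length %d", len(tokens), numCtx)
	}
	return tokens[:numCtx], numCtx, nil
}
```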
- 25 Oct, 2025 1 commit
Patrick Devine authored
- 22 Oct, 2025 1 commit
Patrick Devine authored
- 16 Oct, 2025 1 commit
Jeffrey Morgan authored
Adds a temporary global flag that causes renderers to always render images as [img]. In a follow-up change, we will consider making this the default, and this flag could eventually be removed.
- 11 Oct, 2025 2 commits
Jeffrey Morgan authored
Devon Rifkin authored
api/generate now builds up a message array and generates the prompt through the same function as `api/chat`, for consistency. This is where we hook in the optional built-in renderers that bypass templates, which was missing for `api/generate` before this change. Closes: #12578
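Conceptually (a sketch under assumed names, not the actual routes code), both endpoints now funnel through one prompt builder:
```go
package server

// Message and renderPrompt are illustrative stand-ins for the real types.
type Message struct {
	Role    string
	Content string
}

// chatPrompt is the shared path where templates or built-in renderers apply.
func chatPrompt(msgs []Message) (string, error) {
	return renderPrompt(msgs)
}

// generatePrompt wraps api/generate inputs as messages and reuses chatPrompt,
// so built-in renderers apply to api/generate as well.
func generatePrompt(system, prompt string) (string, error) {
	var msgs []Message
	if system != "" {
		msgs = append(msgs, Message{Role: "system", Content: system})
	}
	msgs = append(msgs, Message{Role: "user", Content: prompt})
	return chatPrompt(msgs)
}

func renderPrompt(msgs []Message) (string, error) {
	// stand-in for template/renderer dispatch
	var out string
	for _, m := range msgs {
		out += m.Role + ": " + m.Content + "\n"
	}
	return out, nil
}
```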
- 10 Oct, 2025 1 commit
Patrick Devine authored
- 09 Oct, 2025 3 commits
Parth Sareen authored
Jeffrey Morgan authored
This reverts commit 6a62b894.
Jeffrey Morgan authored
- 08 Oct, 2025 1 commit
Patrick Devine authored
- 05 Oct, 2025 1 commit
Devon Rifkin authored
This makes the core openai compat layer independent of the middleware that adapts it to our particular gin routes
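One common shape for that decoupling (a sketch with assumed type names): pure conversion functions, plus a thin gin adapter kept separate:
```go
package openai

import (
	"net/http"

	"github.com/gin-gonic/gin"
)

// Framework-independent core: OpenAI-style request in, native request out.
// Types here are illustrative stand-ins, not the real definitions.
type Message struct {
	Role    string `json:"role"`
	Content string `json:"content"`
}

type ChatCompletionRequest struct {
	Model    string    `json:"model"`
	Messages []Message `json:"messages"`
}

type NativeChatRequest struct {
	Model    string
	Messages []Message
}

func FromChatCompletion(r ChatCompletionRequest) NativeChatRequest {
	return NativeChatRequest{Model: r.Model, Messages: r.Messages}
}

// Thin adapter: all gin/HTTP concerns live here, so the core above can be
// tested and reused without the middleware.
func ChatMiddleware(c *gin.Context) {
	var req ChatCompletionRequest
	if err := c.ShouldBindJSON(&req); err != nil {
		c.AbortWithStatusJSON(http.StatusBadRequest, gin.H{"error": err.Error()})
		return
	}
	c.Set("chat", FromChatCompletion(req))
	c.Next()
}
```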
- 01 Oct, 2025 2 commits
Daniel Hiltgen authored
This revamps how we discover GPUs in the system by leveraging the Ollama runner. This should eliminate inconsistency between our GPU discovery and the runner's capabilities at runtime, particularly for cases where we try to filter out unsupported GPUs: now the runner does that implicitly, based on the actual device list. In some cases free VRAM reporting can be unreliable, which can lead to scheduling mistakes, so this also includes a patch to leverage more reliable VRAM reporting libraries if available. Automatic workarounds have been removed, as only one GPU leveraged this, which is now documented; that GPU will soon fall off the support matrix with the next ROCm bump. Additional cleanup of the scheduler and discovery packages can be done in the future, once we have switched on the new memory management code and removed support for the llama runner.
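The direction of the change, sketched with hypothetical types (the real discovery code is far more involved):
```go
package discover

// DeviceInfo and Runner are illustrative stand-ins.
type DeviceInfo struct {
	ID       string
	Name     string
	FreeVRAM uint64 // ideally from a reliable vendor library when available
}

type Runner interface {
	// Devices reports only the devices this runner actually supports.
	Devices() ([]DeviceInfo, error)
}

// UsableDevices asks the runner instead of filtering server-side, so
// discovery and runtime capabilities cannot drift apart.
func UsableDevices(r Runner) ([]DeviceInfo, error) {
	return r.Devices()
}
```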
Michael Yang authored
This reference to keep alive was missed in #12041, so chat had a different behaviour than generate.
- 23 Sep, 2025 1 commit
Patrick Devine authored
auth: fix problems with the ollama keypairs
This change adds several fixes, including:
- reading in the pubkey files correctly
- fixing the push unit test to create a keypair file in a temp directory
- not returning 500 errors for normal status errors
- 18 Sep, 2025 3 commits
Jeffrey Morgan authored
Devon Rifkin authored
Now that we have a built-in parser abstraction, which was introduced in <https://github.com/ollama/ollama/pull/12248>, we can modify our harmony parser to match this and then get rid of nearly all of the harmony-specific logic in routes.go. We do have a small amount of code that turns the parser on by default if the architecture matches and no other built-in parser was provided. The built-in parser interface was modified in order to handle harmony's prefill and tool name translation requirements.
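A sketch of what such a built-in parser abstraction might look like (method names assumed; per the message above, the real interface was extended for harmony's prefill and tool-name translation needs):
```go
package parsers

// ToolCall and Parser are illustrative, not the actual definitions.
type ToolCall struct {
	Name      string
	Arguments map[string]any
}

type Parser interface {
	// Add incrementally consumes model output and returns any completed
	// content, thinking text, and tool calls parsed so far.
	Add(chunk string) (content, thinking string, calls []ToolCall, err error)

	// Without a template to inspect, the parser itself declares what the
	// model supports.
	HasToolSupport() bool
	HasThinkingSupport() bool
}
```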
Michael Yang authored
- 17 Sep, 2025 2 commits
frob authored
Patrick Devine authored
- 15 Sep, 2025 3 commits
Michael Yang authored
- fix truncate
- s/SentencePieceModel/SentencePiece/
- bert
- wordpiece
- refactor pooling
- more tokenizers
- normalize embeddings
Devon Rifkin authored
Devon Rifkin authored
The format qwen3-coder uses is relatively unique, both in rendering and in parsing. To implement parsing, I wrote a custom parser in a similar style to harmony. For the rendering, I found that the logic would be much more difficult to follow in a template, so I introduced the concept of a built-in renderer that uses Go code, rather than a template, to generate prompts. I set us up for future built-in parsers and renderers by making it so they can be specified in a Modelfile like so:

```
RENDERER "qwen3-coder"
PARSER "qwen3-coder"
```

These need to be provided explicitly because the architecture alone is not enough to understand what format the model expects to receive and what format we expect it to output (e.g., qwen3-coder is `qwen3moe`, which includes other qwen3-family models as well).

I haven't converted harmony to be one of these "built-ins" yet, since some of it is in flux with the changes @ParthSareen has been making to move harmony to the runner. It is likely that many other built-ins will need to move to the runner as well, but I'm able to slightly defer that decision since qwen3-coder doesn't have thinking (and therefore doesn't need to be in the runner to make structured outputs work). I expect to unify harmony with this approach very soon.

Whether a particular model supports tools or thinking was previously inferred from templates, but without a template we now also use the parser itself to declare what it supports. If we have future models that re-use the same parsing format but have different capabilities, we'll want to parameterize them and give them different names to be specified as a `PARSER`.

Misc changes:
- I worked on the renderer by diffing outputs from the reference implementation and ours. To make it easier to do this, I extended <https://github.com/ollama/ollama/pull/11875> to also support returning the prompt via the openai compat layer.
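
A plausible sketch of how such named built-ins could be resolved from Modelfile directives (registry shape and names assumed for illustration):
```go
package builtins

import "fmt"

type Message struct {
	Role    string
	Content string
}

// Renderer generates a prompt in Go code rather than via a template.
type Renderer func(msgs []Message) (string, error)

var renderers = map[string]Renderer{
	"qwen3-coder": renderQwen3Coder, // selected by RENDERER "qwen3-coder"
}

// RendererFor resolves the name given in a Modelfile RENDERER directive.
func RendererFor(name string) (Renderer, error) {
	r, ok := renderers[name]
	if !ok {
		return nil, fmt.Errorf("unknown renderer %q", name)
	}
	return r, nil
}

func renderQwen3Coder(msgs []Message) (string, error) {
	// stand-in for the real qwen3-coder rendering logic
	var out string
	for _, m := range msgs {
		out += "<|im_start|>" + m.Role + "\n" + m.Content + "<|im_end|>\n"
	}
	return out, nil
}
```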
- 12 Sep, 2025 2 commits
- 11 Sep, 2025 1 commit
Michael Yang authored
- feat: add field to truncate embeddings
- add support for the OpenAI embeddings `dimensions` field
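
For illustration, a request against the OpenAI-compatible embeddings endpoint using `dimensions` (model name arbitrary; the field follows the OpenAI embeddings API shape):
```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"io"
	"net/http"
)

func main() {
	body, _ := json.Marshal(map[string]any{
		"model":      "all-minilm",
		"input":      "Why is the sky blue?",
		"dimensions": 256, // truncate the returned embedding to 256 values
	})

	resp, err := http.Post("http://localhost:11434/v1/embeddings", "application/json", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	out, _ := io.ReadAll(resp.Body)
	fmt.Println(string(out))
}
```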
- 10 Sep, 2025 1 commit
Parth Sareen authored
- 08 Sep, 2025 1 commit
Parth Sareen authored
- 27 Aug, 2025 1 commit
Michael Yang authored
- 22 Aug, 2025 1 commit
Jeffrey Morgan authored
- 21 Aug, 2025 1 commit
Parth Sareen authored
- 18 Aug, 2025 1 commit
Devon Rifkin authored
In <https://github.com/ollama/ollama/issues/11704#issuecomment-3177380197> I noticed that hyphens in function names could possibly cause the model to become confused. Later in that issue I found other explanations, but at a minimum, tool names with spaces in them are confusing to the model because of the prompt format.

In this change I create a mapper that converts arbitrary tool names into valid TypeScript identifiers. It's a little overly strict in that it doesn't allow all unicode characters that might be valid in TS identifiers, but it's still very permissive. Since mappings aren't reversible, we must temporarily store each mapping in order to unmap it if the model comes back with a call. We also handle the case where multiple tool names collide into the same mapping, appending a counter to the end to make them unique.
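A minimal sketch of the mapping idea (identifier rules simplified; type and method names hypothetical):
```go
package tools

import (
	"fmt"
	"strings"
	"unicode"
)

// NameMapper converts arbitrary tool names into valid identifiers and
// remembers each mapping so a model's tool call can be unmapped later.
type NameMapper struct {
	toMapped   map[string]string
	fromMapped map[string]string
}

func NewNameMapper() *NameMapper {
	return &NameMapper{toMapped: map[string]string{}, fromMapped: map[string]string{}}
}

func (m *NameMapper) Map(name string) string {
	if mapped, ok := m.toMapped[name]; ok {
		return mapped
	}
	var b strings.Builder
	for i, r := range name {
		switch {
		case unicode.IsLetter(r), r == '_', r == '$':
			b.WriteRune(r)
		case unicode.IsDigit(r) && i > 0:
			b.WriteRune(r)
		default:
			b.WriteRune('_') // hyphens, spaces, etc. become underscores
		}
	}
	mapped := b.String()
	if mapped == "" {
		mapped = "_"
	}
	// On collision with another name's mapping, append a counter.
	base := mapped
	for n := 2; ; n++ {
		if _, taken := m.fromMapped[mapped]; !taken {
			break
		}
		mapped = fmt.Sprintf("%s_%d", base, n)
	}
	m.toMapped[name] = mapped
	m.fromMapped[mapped] = name
	return mapped
}

// Unmap recovers the original tool name from a mapped identifier.
func (m *NameMapper) Unmap(mapped string) string {
	if name, ok := m.fromMapped[mapped]; ok {
		return name
	}
	return mapped
}
```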