- 18 Jun, 2025 3 commits
  - Jeffrey Morgan authored
    This reverts commit 6b04cad7.
  - 曹家巧 authored
  - Jeffrey Morgan authored
- 17 Jun, 2025 1 commit
  - Jeffrey Morgan authored
    Fixes an issue where tool calls that don't expect any parameters were not being parsed. This also fixes two additional issues: one where two or more tool calls would not be parsed correctly, and one where tool calls with invalid parameters would still get parsed.
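To illustrate the parameter-less and multiple-call cases, here is a minimal sketch of a tolerant tool-call parser in Go. The `ToolCall` shape and `parseToolCalls` helper are illustrative assumptions, not Ollama's actual parser:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// ToolCall mirrors the common shape of a model-emitted tool call.
// Arguments may be absent for tools that take no parameters.
type ToolCall struct {
	Name      string         `json:"name"`
	Arguments map[string]any `json:"arguments,omitempty"`
}

// parseToolCalls accepts either a single JSON object or an array of
// objects, so two or more tool calls in one response still parse,
// and malformed calls are rejected instead of passing through.
func parseToolCalls(raw []byte) ([]ToolCall, error) {
	var many []ToolCall
	if err := json.Unmarshal(raw, &many); err == nil {
		return many, nil
	}
	var one ToolCall
	if err := json.Unmarshal(raw, &one); err != nil {
		return nil, fmt.Errorf("invalid tool call: %w", err)
	}
	return []ToolCall{one}, nil
}

func main() {
	// A tool call with no parameters still parses.
	calls, _ := parseToolCalls([]byte(`{"name":"get_time"}`))
	fmt.Println(calls[0].Name)
}
```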
- 16 Jun, 2025 3 commits
  - Jeffrey Morgan authored
  - Michael Yang authored
    * ggml: test write gguf order
    * ggml: fix write tensor order
  - NGC13009 authored
- 14 Jun, 2025 1 commit
  - Phil authored
- 12 Jun, 2025 2 commits
  - Jeffrey Morgan authored
  - Michael Yang authored
    * incremental gguf parser (see the sketch below)
    * gguf: update test to not rely on gguf on disk
    * re-use existing create gguf
    * read capabilities from gguf kv
    * kv exists
    * update tests
    * s/doneFunc/successFunc/g
    * new buffered reader
    Co-authored-by: Bruce MacDonald <brucewmacdonald@gmail.com>
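As a rough illustration of incremental parsing with a buffered reader, the sketch below reads just the fixed-size GGUF header (magic, version, tensor count, KV count) from a stream rather than loading the whole file. It is a simplified stand-in for the parser in this change, not its actual code:

```go
package main

import (
	"bufio"
	"encoding/binary"
	"fmt"
	"io"
	"os"
)

// ggufHeader holds the fixed-size fields at the start of a GGUF file.
type ggufHeader struct {
	Version     uint32
	TensorCount uint64
	KVCount     uint64
}

// readHeader consumes only the header bytes from a buffered stream;
// KV pairs and tensor metadata can then be read incrementally after it.
func readHeader(r io.Reader) (*ggufHeader, error) {
	br := bufio.NewReader(r)
	magic := make([]byte, 4)
	if _, err := io.ReadFull(br, magic); err != nil {
		return nil, err
	}
	if string(magic) != "GGUF" {
		return nil, fmt.Errorf("not a gguf file")
	}
	var h ggufHeader
	for _, field := range []any{&h.Version, &h.TensorCount, &h.KVCount} {
		if err := binary.Read(br, binary.LittleEndian, field); err != nil {
			return nil, err
		}
	}
	return &h, nil
}

func main() {
	f, err := os.Open("model.gguf") // hypothetical path
	if err != nil {
		fmt.Println(err)
		return
	}
	defer f.Close()
	h, err := readHeader(f)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Printf("gguf v%d: %d tensors, %d kv pairs\n", h.Version, h.TensorCount, h.KVCount)
}
```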
- 11 Jun, 2025 3 commits
  - Michael Yang authored
    The current splitDim function only operates on tensors that are split evenly, which isn't always the case, e.g. a QKV tensor whose sections may differ in size. This change allows the function to be used for arbitrary splits.
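A minimal sketch of an arbitrary split along one dimension, using plain slices in place of real tensors; the real splitDim operates on tensor data, and the sizes here are illustrative:

```go
package main

import "fmt"

// splitDim splits data into consecutive chunks of the given sizes,
// which need not be equal.
func splitDim(data []float32, sizes ...int) ([][]float32, error) {
	total := 0
	for _, s := range sizes {
		total += s
	}
	if total != len(data) {
		return nil, fmt.Errorf("sizes sum to %d, want %d", total, len(data))
	}
	out := make([][]float32, 0, len(sizes))
	off := 0
	for _, s := range sizes {
		out = append(out, data[off:off+s])
		off += s
	}
	return out, nil
}

func main() {
	// A fused QKV row where K and V are smaller than Q (GQA-style),
	// so an even three-way split would be wrong.
	qkv := make([]float32, 8+2+2)
	parts, _ := splitDim(qkv, 8, 2, 2)
	fmt.Println(len(parts[0]), len(parts[1]), len(parts[2]))
}
```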
  - Michael Yang authored
    If tokenizer.json has already been copied, skip tokenizer.model.
  - Michael Yang authored
    While nn.Linear.Forward isn't applicable for a sparse MLP, nn.Linear is still a nice container for the tensors.
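A compilable sketch of that container idea, with hypothetical stand-in types (Ollama's real `nn` layers wrap backend tensors, not these):

```go
package nn

// Tensor is a stand-in for the backend tensor type.
type Tensor any

// Linear groups a projection's weight and bias, even when its
// Forward method is never called.
type Linear struct {
	Weight Tensor
	Bias   Tensor
}

// SparseMLP routes tokens through a subset of experts itself, so it
// never calls Linear.Forward, but still uses Linear to hold each
// expert's tensors.
type SparseMLP struct {
	Router  Linear
	Experts []Linear
}
```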
- 10 Jun, 2025 3 commits
  - Attogram Project authored
  - Jeffrey Morgan authored
  - Jeffrey Morgan authored
- 09 Jun, 2025 1 commit
  - Daniel Hiltgen authored
    When a user elects to keep the existing app, the new Ollama is named `Ollama 2.app`. This fixes the app startup flow to handle this naming pattern.
- 08 Jun, 2025 1 commit
  - Daniel Hiltgen authored
    Give the desktop app a hint to start fast.
- 07 Jun, 2025 2 commits
  - Krzysztof Jeziorny authored
  - Jeffrey Morgan authored
    This reverts commit 09430011.
- 06 Jun, 2025 4 commits
  - Daniel Hiltgen authored
    When starting the app in the background, start it hidden.
  - Daniel Hiltgen authored
    Fix an array out-of-bounds crash.
  - Devon Rifkin authored
    Move thinking logic into its own package.
  - Hunter Wittenborn authored
- 05 Jun, 2025 2 commits
  - Devon Rifkin authored
    Export ThinkingParser.
  - Devon Rifkin authored
- 04 Jun, 2025 1 commit
  - JasonHonKL authored
- 31 May, 2025 1 commit
  - HardCodeDev authored
- 30 May, 2025 1 commit
  - Parth Sareen authored
- 29 May, 2025 3 commits
  - Jesse Gross authored
    This enables matching up devices and information reported by the backend with system management libraries such as nvml to get accurate free memory reporting.
  - Jesse Gross authored
    "POST predict" basically means that the runner has crashed, which can have many causes. However, many people think this is a specific error and either report only this message or group unrelated bugs together. This replaces it with a friendlier, more helpful message.
  - Devon Rifkin authored
    - Both `/api/generate` and `/api/chat` now accept a `"think"` option that allows specifying whether thinking mode should be on or not (see the request sketch after this list)
    - Templates get passed this new option, so, e.g., qwen3's template can put `/think` or `/no_think` in the system prompt depending on the value of the setting
    - Models' thinking support is inferred by inspecting model templates. The prefix and suffix the parser uses to identify thinking blocks are also automatically inferred from templates
    - Thinking control and parsing are opt-in via the API to avoid breaking existing API consumers. If the `"think"` option is not specified, the behavior is unchanged from previous versions of ollama
    - Parsing for thinking blocks is added in both streaming and non-streaming mode in both `/generate` and `/chat`
    - The CLI makes use of these changes. Users can pass `--think` or `--think=false` to control thinking, or during an interactive session use the commands `/set think` or `/set nothink`
    - A `--hidethinking` option has also been added to the CLI. This makes it easy to use thinking in scripting scenarios like `ollama run qwen3 --think --hidethinking "my question here"` where you just want to see the answer but still want the benefits of thinking models
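A minimal sketch of calling `/api/chat` with the new option. The `"think"` request field and the endpoint come from the entry above; the `thinking` field in the response message is an assumption about the reply shape:

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
	"os"
)

func main() {
	// Build a non-streaming chat request with thinking enabled.
	body, _ := json.Marshal(map[string]any{
		"model": "qwen3",
		"messages": []map[string]string{
			{"role": "user", "content": "Why is the sky blue?"},
		},
		"think":  true,
		"stream": false,
	})
	resp, err := http.Post("http://localhost:11434/api/chat", "application/json", bytes.NewReader(body))
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	defer resp.Body.Close()

	// Assumed reply shape: thinking and answer arrive as separate fields.
	var out struct {
		Message struct {
			Thinking string `json:"thinking"`
			Content  string `json:"content"`
		} `json:"message"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("thinking:", out.Message.Thinking)
	fmt.Println("answer:", out.Message.Content)
}
```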
- 27 May, 2025 5 commits
  - Patrick Devine authored
    If OLLAMA_AUTH is set, sign each request with a timestamp and pass the signature in the token header.
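A rough sketch of the timestamp-signing idea, shown with an ed25519 key. The payload layout, header name, and helper here are hypothetical illustrations, not the scheme this commit implements:

```go
package main

import (
	"crypto/ed25519"
	"crypto/rand"
	"encoding/base64"
	"fmt"
	"net/http"
	"strconv"
	"time"
)

// signRequest signs a per-request payload that includes the current
// timestamp, so a captured signature cannot be replayed later.
func signRequest(req *http.Request, priv ed25519.PrivateKey) {
	ts := strconv.FormatInt(time.Now().Unix(), 10)
	payload := req.Method + ":" + req.URL.Path + ":" + ts // hypothetical payload
	sig := ed25519.Sign(priv, []byte(payload))
	req.Header.Set("Authorization", ts+":"+base64.StdEncoding.EncodeToString(sig))
}

func main() {
	_, priv, _ := ed25519.GenerateKey(rand.Reader) // stand-in for the local key
	req, _ := http.NewRequest("GET", "http://localhost:11434/api/tags", nil)
	signRequest(req, priv)
	fmt.Println(req.Header.Get("Authorization"))
}
```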
  - Jesse Gross authored
    Computing an attention mask for a large context and max batch is expensive: over 100ms. Models like Gemma3 that have multiple types of caches and custom attention masks need to do this four times, which adds approximately 500ms to startup time when using a 128k context. When we are reserving the worst-case graph we don't need the mask, only its shape, so we can skip this.
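The shape-only trick can be sketched like this, with stand-in types; the real change lives in the model graph reservation path, not in code like this:

```go
package main

import "fmt"

// mask is a stand-in for a backend tensor: a shape plus optional data.
type mask struct {
	shape []int
	data  []float32 // nil when only the shape is needed
}

// buildMask computes real attention-mask values; this is the expensive
// path at large contexts.
func buildMask(seqLen, batch int) mask {
	m := mask{shape: []int{batch, seqLen, seqLen}, data: make([]float32, batch*seqLen*seqLen)}
	// ... fill causal / sliding-window values here ...
	return m
}

// reserveMask returns a mask with the same shape but no values, which is
// all that worst-case graph reservation needs.
func reserveMask(seqLen, batch int) mask {
	return mask{shape: []int{batch, seqLen, seqLen}}
}

func main() {
	fmt.Println(reserveMask(131072, 4).shape)
}
```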
  - Kyle Steere authored
    Signed-off-by: Kyle Steere <kyle.steere@chainguard.dev>
  - Parth Sareen authored
  - Parth Sareen authored
- 26 May, 2025 1 commit
  - RAPID ARCHITECT authored
- 24 May, 2025 2 commits
  - Min Yoo authored
    This commit updates the README to include macLlama within the community integrations section. macLlama is a native macOS application built for lightweight and efficient LLM interaction. Key features include:
    * **Lightweight & Native:** Designed to be resource-friendly and perform optimally on macOS.
    * **Chat-like Interface:** Provides a user-friendly, conversational interface.
    * **Multiple Window Support:** Allows users to manage multiple conversations simultaneously.
    The primary goal of macLlama is to offer a simple and easy-to-run LLM experience on macOS.
  - Daniel Hiltgen authored