- 23 May, 2025 2 commits
-
-
Parth Sareen authored
-
Parth Sareen authored
-
- 22 May, 2025 7 commits
-
-
Jesse Gross authored
FromFloatSlice and FromIntSlice return an error if the shape doesn't match the passed data or if memory can't be allocated. Since these are inputs, the memory being allocated is system memory rather than VRAM. In many cases, the caller can't really handle the error and panics. Empty and Zeros directly panic if they can't allocate memory. This makes things consistent by panicking in the first two cases as well, removing a fair amount of error handling code. It is also consistent with how Go typically handles these situations.
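A minimal sketch of the pattern this describes, with a hypothetical fromFloatSlice stand-in rather than the actual runner API: shape mismatches and allocation failures panic instead of returning an error.

```go
package main

import "fmt"

// fromFloatSlice is a hypothetical stand-in for the constructor described above:
// the shape must match the data, and any failure is treated as a programming or
// system error, so we panic instead of returning an error.
func fromFloatSlice(data []float32, shape ...int) []float32 {
	n := 1
	for _, d := range shape {
		n *= d
	}
	if n != len(data) {
		panic(fmt.Sprintf("shape %v does not match %d elements", shape, len(data)))
	}
	// The real runner allocates backend (system) memory here; this sketch just
	// copies into a new slice to stay self-contained.
	out := make([]float32, n)
	copy(out, data)
	return out
}

func main() {
	t := fromFloatSlice([]float32{1, 2, 3, 4, 5, 6}, 2, 3)
	fmt.Println(len(t)) // 6
}
```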
-
Jesse Gross authored
This provides granular information about the backend memory allocations required by the runner:
- Per backend
- Per layer
- Weights, cache and graph
- Allocation status
This can be used for debugging and validating memory estimates.
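A hypothetical sketch of the kind of per-backend, per-layer breakdown the commit describes; the actual type and field names in the runner may differ.

```go
package main

import "fmt"

// MemoryKind tracks the three categories of memory per layer plus whether the
// allocation succeeded (all names here are illustrative, not the real API).
type MemoryKind struct {
	Weights, Cache, Graph uint64 // bytes
	Allocated             bool   // allocation status
}

type LayerMemory map[int]MemoryKind // keyed by layer index

type BackendMemory struct {
	Name   string
	Layers LayerMemory
}

func main() {
	m := BackendMemory{
		Name: "CUDA0",
		Layers: LayerMemory{
			0: {Weights: 512 << 20, Cache: 64 << 20, Graph: 32 << 20, Allocated: true},
		},
	}
	for i, l := range m.Layers {
		fmt.Printf("%s layer %d: weights=%d cache=%d graph=%d ok=%v\n",
			m.Name, i, l.Weights, l.Cache, l.Graph, l.Allocated)
	}
}
```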
-
Jesse Gross authored
GGML has a function to report the allocated size of a backend buffer. However, this returns 0 if we tried to allocate a buffer and it failed. For memory management purposes, it's important to know how much we were trying to allocate. This extends the API to report attempted sizes for all buffers and whether the allocation succeeded.
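A small sketch of the reporting shape this implies, using hypothetical names: the attempted size is preserved even when the allocation fails, instead of collapsing to 0.

```go
package main

import "fmt"

// BufferStatus is a hypothetical sketch of the extended reporting described
// above: even when an allocation fails, we keep the size we attempted.
type BufferStatus struct {
	Attempted uint64 // bytes we tried to allocate
	Allocated bool   // whether the backend allocation succeeded
}

func tryAlloc(size, free uint64) BufferStatus {
	return BufferStatus{Attempted: size, Allocated: size <= free}
}

func main() {
	// The attempted size is still known after a failure.
	fmt.Println(tryAlloc(8<<30, 6<<30)) // {8589934592 false}
}
```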
-
Daniel Hiltgen authored
When the same model is being reloaded rapidly with client connections being canceled before the model finishes loading, the queued unload event could cause a leak of runners by deleting a different runner from the loaded list.
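A hypothetical sketch of the guard this fix implies: a queued unload event only removes the runner it was created for, not whichever runner currently holds the model after a rapid reload. The scheduler and runner types here are illustrative, not the actual Ollama code.

```go
package main

import (
	"fmt"
	"sync"
)

type runner struct{ id int }

type scheduler struct {
	mu     sync.Mutex
	loaded map[string]*runner
}

// unload removes a runner from the loaded list only if it is still the same
// runner the unload event refers to, avoiding the leak described above.
func (s *scheduler) unload(model string, expected *runner) {
	s.mu.Lock()
	defer s.mu.Unlock()
	if cur, ok := s.loaded[model]; ok && cur == expected {
		delete(s.loaded, model)
	}
}

func main() {
	old, fresh := &runner{1}, &runner{2}
	s := &scheduler{loaded: map[string]*runner{"llama3": fresh}}
	s.unload("llama3", old)            // stale event: no-op, fresh runner stays loaded
	fmt.Println(s.loaded["llama3"].id) // 2
}
```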
-
Michael Yang authored
* fix mllama convert
  - transform attn_gate and ffn_gate
  - swap attention heads for vision models
* fix mllama: the mlp gate was applied in the wrong place
-
Bruce MacDonald authored
Fall back to alternative quantization types when a tensor's dimensions aren't divisible by the block size required by the originally requested quantization type. If the retried quantization types also fail, the system ultimately falls back to F16 (half-precision floating point), which has a block size of 1 and can handle any tensor dimension.
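A minimal sketch of this divisibility check, assuming common GGML block sizes (Q4_K: 256, Q4_0: 32, F16: 1); the actual fallback order in the converter may differ.

```go
package main

import "fmt"

// Block sizes for a few representative GGML types; F16 has block size 1 and
// therefore accepts any tensor width.
var blockSize = map[string]int{"Q4_K": 256, "Q4_0": 32, "F16": 1}

// pickQuant is a hypothetical helper: try the requested type first, then an
// alternative, and finally fall back to F16.
func pickQuant(want string, dim int) string {
	for _, q := range []string{want, "Q4_0", "F16"} {
		if bs, ok := blockSize[q]; ok && dim%bs == 0 {
			return q
		}
	}
	return "F16"
}

func main() {
	fmt.Println(pickQuant("Q4_K", 4096)) // Q4_K: 4096 is divisible by 256
	fmt.Println(pickQuant("Q4_K", 4100)) // F16: 4100 is divisible by neither 256 nor 32
}
```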
-
Daniel Hiltgen authored
Replace the older llava model with qwen2.5 for vision tests. Skip the split-batch test on small-VRAM systems to avoid excessive test time.
-
- 21 May, 2025 7 commits
-
-
Michael Yang authored
* remove support for multiple GGUFs in a single file: this was an attempt to make it easier to import multimodal models into Ollama, but it was rarely used and error prone, so remove it
* fix: create fused model from blob
-
Daniel Hiltgen authored
Give the user a helpful error instead of showing connection refused errors.
-
Michael Yang authored
-
Michael Yang authored
* feat: qwen3 dense
* feat: qwen3moe
* fix llama4 moe
-
Michael Yang authored
This fixes an issue introduced in #10788.
-
Michael Yang authored
-
Michael Yang authored
Setting SameBatch on the vision start token is problematic because that token will be shared with other inputs that also use images. This causes the input to be cached, so the runner will not see SameBatch; SameBatch would also be incorrect since it may refer to a different image. Assigning SameBatch to the input tokens resolves this by ensuring it's assigned correctly to the inputs corresponding to the image. Not setting SameBatch correctly may cause panics during inference since images are no longer guaranteed to be in the same batch.
-
- 20 May, 2025 2 commits
-
-
Michael Yang authored
-
DarkCaster authored
-
- 19 May, 2025 6 commits
-
-
Michael Yang authored
* fix llama model
* fix mistral3.1 model: do not set default vision layers
-
Jesse Gross authored
This is a partial revert of 0478d440 "Fixed over vram allcation dure to small initial layer sizes." Previously we used the size of the first layer as an extra reserved amount of space to buffer our memory estimates. The above commit changed this to use the largest layer. However, this had performance impacts on more models than the original commit was trying to fix. This is just a heuristic without an ideal solution, so this goes back to the historic behavior. Fixes: #10765, #10756, #10752, #10726
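A minimal sketch of the restored heuristic, with a hypothetical reservedBuffer helper: the extra reserved space is taken from the first layer's size rather than the largest layer's.

```go
package main

import "fmt"

// reservedBuffer returns the extra space added to the memory estimate. Per the
// historic behavior described above, it uses the first layer's size, not the
// maximum across layers.
func reservedBuffer(layerSizes []uint64) uint64 {
	if len(layerSizes) == 0 {
		return 0
	}
	return layerSizes[0]
}

func main() {
	layers := []uint64{300 << 20, 450 << 20, 450 << 20} // bytes per layer
	fmt.Printf("reserve %d MiB\n", reservedBuffer(layers)>>20) // 300
}
```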
-
Daniel Hiltgen authored
-
Jesse Gross authored
Currently, when the backend is created, the tensors are loaded at the same time, which is a slow operation. This separates them into two steps:
- Creating the backend, including enumerating tensors and allocating memory
- Loading the tensor data
This allows more flexibility in managing model loading.
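A hypothetical sketch of the two-step split described above; the Backend type and method names are illustrative, not the real API.

```go
package main

import "fmt"

type Backend struct {
	tensors map[string]int // name -> size in bytes (placeholder for real metadata)
	loaded  bool
}

// NewBackend enumerates tensors and reserves memory, but loads no tensor data.
func NewBackend() *Backend {
	return &Backend{tensors: map[string]int{"token_embd.weight": 1 << 20}}
}

// Load performs the slow step of reading tensor data into the allocated buffers.
func (b *Backend) Load() error {
	b.loaded = true // real code would read from the model file here
	return nil
}

func main() {
	b := NewBackend()           // fast: structure and allocations only
	fmt.Println(len(b.tensors)) // 1
	if err := b.Load(); err != nil { // slow: tensor data
		panic(err)
	}
	fmt.Println(b.loaded) // true
}
```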
-
Jesse Gross authored
The Llama engine always places vision projectors on the first GPU if one exists. However, the Ollama engine groups it with the output layer, which means the projector is only offloaded if all other layers are offloaded. The memory estimation code always assumes the former layout; this changes it to use the correct layout based on the engine. This addresses two impacts of the current behavior:
- In multi-GPU setups, we can crash with OOM errors when we try to allocate memory on a full GPU while another still has space.
- If the vision projector is large, it may prevent us from offloading anything when we could have fit some of the text layers.
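A hypothetical sketch of the layout decision this implies; the engine enum and projectorGPU helper are illustrative simplifications, not the actual estimator code.

```go
package main

import "fmt"

type engine int

const (
	llamaEngine engine = iota
	ollamaEngine
)

// projectorGPU returns the GPU index where the vision projector's memory is
// accounted for: the first GPU for the llama engine, or grouped with the output
// layer (here simplified to the last GPU) for the Ollama engine.
func projectorGPU(e engine, gpuLayers []int) int {
	if e == llamaEngine {
		return 0
	}
	return len(gpuLayers) - 1
}

func main() {
	gpus := []int{24, 24} // layers assigned per GPU (illustrative)
	fmt.Println(projectorGPU(llamaEngine, gpus))  // 0
	fmt.Println(projectorGPU(ollamaEngine, gpus)) // 1
}
```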
-
Jesse Gross authored
In some cases, if we fail to assign a piece of the model to a GPU, then we lose track of this data. Although it doesn't change the memory allocation, it does affect the total size of the model reported by tools such as ollama ps (and also the percent offloaded). This makes it look like setting num_gpu isn't reflected in ollama ps, which isn't true, although the offloading percent may appear not to change. Spreading the model across more GPUs will continue to impact the reported total size of the model.
-
- 18 May, 2025 1 commit
-
-
Ronald Wilson authored
This PR adds Tiny Notepad, a lightweight, notepad-like interface to chat with local LLMs via Ollama.
- It's designed as a simple, distraction-free alternative.
- The app supports basic note-taking, timestamped logs, and model parameter controls.
- Built with Tkinter, it runs entirely offline and is available via PyPI.
It aims to provide a lightweight, easy-to-install-and-run interface for Ollama.
-
- 16 May, 2025 1 commit
-
-
Michael Yang authored
* get eos_token_id from generation_config.json
* refactor
* include both ids and strings in trace
* comments
* remove special case for gemma3 special vocab (#10743)
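A small sketch of reading eos_token_id from generation_config.json. The field can be either a single integer or a list in Hugging Face configs, so it is decoded loosely here; the struct and function names are illustrative.

```go
package main

import (
	"encoding/json"
	"fmt"
)

type generationConfig struct {
	EOSTokenID json.RawMessage `json:"eos_token_id"`
}

// eosTokenIDs accepts either a single id or a list of ids and normalizes to a slice.
func eosTokenIDs(data []byte) ([]int32, error) {
	var cfg generationConfig
	if err := json.Unmarshal(data, &cfg); err != nil {
		return nil, err
	}
	var ids []int32
	if err := json.Unmarshal(cfg.EOSTokenID, &ids); err == nil {
		return ids, nil
	}
	var id int32
	if err := json.Unmarshal(cfg.EOSTokenID, &id); err != nil {
		return nil, err
	}
	return []int32{id}, nil
}

func main() {
	ids, _ := eosTokenIDs([]byte(`{"eos_token_id": [1, 106]}`))
	fmt.Println(ids) // [1 106]
}
```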
-
- 15 May, 2025 7 commits
-
-
Daniel Hiltgen authored
-
Bruce MacDonald authored
When a piece of information has been truncated in the show output, display an ellipsis to indicate that more data has not been displayed.
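A minimal sketch of the truncation behavior, assuming a hypothetical helper (not the actual CLI code) and ASCII input with a width of at least 4.

```go
package main

import "fmt"

// truncateWithEllipsis cuts a value that exceeds the column width and appends
// "..." so the user can see that data was omitted.
func truncateWithEllipsis(s string, max int) string {
	if len(s) <= max {
		return s
	}
	return s[:max-3] + "..."
}

func main() {
	fmt.Println(truncateWithEllipsis("You are a helpful assistant with a very long system prompt", 30))
	// Output: You are a helpful assistant...
}
```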
-
Jesse Gross authored
We currently preallocate compute graph memory for the worst case batch of text tokens. This adds support for doing the same for images. Note that image models are more complicated than text models in how they process their inputs so there may be cases where this approach isn't completely generic for all models. It covers all currently supported models though.
-
Jesse Gross authored
For some multimodal models (such as gemma3), we create a single graph that generates the image embedding and then use this in the text model. The embedding tensor is completely opaque to the runner. However, this doesn't work if we need to use the embedding in multiple batches. This can arise if the embedding is larger than the batch size. In these cases (as with llama4), we would like to create views that are more appropriately sized. However, if we do this then the original source tensor is used in multiple graphs, which isn't allowed. To avoid that problem, models with this pattern compute the embedding tensor on first use and recreate the individual views. There is no longer a single vision and text graph.

This codifies the pattern of separating vision and text graphs. The logic of computing tensors on demand is moved to the runner, so models no longer have to worry about this. It also gives the runner visibility into the multimodal tensors, which is important for memory management.
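A hypothetical sketch of the compute-on-first-use pattern this describes: the vision graph runs only the first time a batch needs the embedding, after which appropriately sized views are handed out. The types here are illustrative, not the runner's actual multimodal handling.

```go
package main

import "fmt"

// multimodalEntry caches a vision embedding after its first use so that views
// for multiple batches can reuse a single computation.
type multimodalEntry struct {
	compute func() []float32 // runs the vision graph
	data    []float32        // cached embedding after first use
}

func (m *multimodalEntry) view(offset, n int) []float32 {
	if m.data == nil {
		m.data = m.compute() // first use: run the vision graph once
	}
	return m.data[offset : offset+n]
}

func main() {
	e := &multimodalEntry{compute: func() []float32 {
		fmt.Println("running vision graph")
		return make([]float32, 8)
	}}
	_ = e.view(0, 4) // computes the embedding
	_ = e.view(4, 4) // reuses the cached embedding for the next batch
}
```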
-
Jesse Gross authored
When we restore a sequence from the cache, we split the prompt into the already used tokens (stored in the cache) and new tokens that need to be processed. Currently, the references to the used tokens are coming from the stored previous sequence. However, even though we know that the used tokens are semantically equivalent to the prefix of the prompt, tokens can contain pointers which are no longer valid. As a result, it is better to get the used tokens from the prompt, which has currently valid pointers. This doesn't currently have any impact because it isn't possible to reuse the pointers (which are tensors) anyways. However, it becomes an issue once we can.
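A minimal sketch of the split described above, with hypothetical input and splitPrompt names: both the used prefix and the remaining tokens are taken from the current prompt, so no stale references from the old sequence are kept.

```go
package main

import "fmt"

type input struct {
	token int32
	// in the real runner this may also carry multimodal data, whose pointers
	// from an old sequence could become invalid
}

// splitPrompt divides the prompt into the cached prefix and the tokens that
// still need processing, referencing only the (currently valid) prompt.
func splitPrompt(prompt []input, cachedLen int) (used, remaining []input) {
	if cachedLen > len(prompt) {
		cachedLen = len(prompt)
	}
	return prompt[:cachedLen], prompt[cachedLen:]
}

func main() {
	prompt := []input{{token: 1}, {token: 2}, {token: 3}, {token: 4}, {token: 5}}
	used, rest := splitPrompt(prompt, 3)
	fmt.Println(len(used), len(rest)) // 3 2
}
```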
-
Michael Yang authored
* panic if trying to pad 4d * fix pixel values padding
-
Michael Yang authored
Cross-attention Q and K projections need to have their heads swapped, similar to non-cross-attention Q and K tensors.
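A hypothetical sketch of a llama.cpp-style head permute on a flat weight matrix: each head's rows are viewed as (2, rowsPerHalf) and the two axes swapped, interleaving the halves. The actual transform in the converter may differ in layout and naming.

```go
package main

import "fmt"

// permuteQK reshapes the rows of a Q/K projection to (heads, 2, half, cols),
// swaps the middle two axes, and flattens back.
func permuteQK(w []float32, rows, cols, heads int) []float32 {
	out := make([]float32, len(w))
	half := rows / heads / 2 // rows per half within each head
	for h := 0; h < heads; h++ {
		for p := 0; p < 2; p++ {
			for r := 0; r < half; r++ {
				src := ((h*2+p)*half + r) * cols
				dst := ((h*half+r)*2 + p) * cols
				copy(out[dst:dst+cols], w[src:src+cols])
			}
		}
	}
	return out
}

func main() {
	// 1 head, 4 rows, 1 column: halves [1,2] and [3,4] interleave to [1,3,2,4].
	fmt.Println(permuteQK([]float32{1, 2, 3, 4}, 4, 1, 1)) // [1 3 2 4]
}
```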
-
- 14 May, 2025 4 commits
-
-
Bruce MacDonald authored
-
Daniel Hiltgen authored
Older clients assumed the digest was at least 19 characters long, so increase the size of the dummy digest to avoid array out-of-bounds crashes.
-
Bruce MacDonald authored
-
Michael Yang authored
-
- 13 May, 2025 3 commits
-
-
tej authored
Co-authored-by: Tej Kiran <kiran.tej@amd.com>
Co-authored-by: Michael Yang <mxyng@pm.me>
Co-authored-by: Tej Kiran <itej89@gmailcom>
-
Parth Sareen authored
-
Jeffrey Morgan authored
-