"torchvision/vscode:/vscode.git/clone" did not exist on "f1b4c7a6fd65479a096ed6ae44fb5e762af6c0f4"
- 05 Nov, 2025 11 commits
-
-
Eva Ho authored
-
Eva Ho authored
-
Daniel Hiltgen authored
-
nicole pardal authored
Co-authored-by: A-Akhil <akhilrahul70@gmail.com>
This PR introduces a new `ollama embed` command that allows users to generate embeddings directly from the command line.
* Added `ollama embed MODEL [TEXT...]` command for generating text embeddings
* Supports both direct text arguments and stdin piping for scripted workflows
* Outputs embeddings as JSON arrays (one per line)
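A minimal sketch of the input/output contract described above, assuming a hypothetical helper (`embedText` stands in for the real embedding call; it is not ollama's actual API):

```go
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// embedText is a hypothetical stand-in for the real embedding call.
func embedText(model, text string) ([]float32, error) {
	return []float32{0.1, 0.2, 0.3}, nil // illustrative output only
}

func main() {
	if len(os.Args) < 2 {
		fmt.Fprintln(os.Stderr, "usage: embed MODEL [TEXT...]")
		os.Exit(1)
	}
	model, inputs := os.Args[1], os.Args[2:]
	if len(inputs) == 0 { // no TEXT arguments: support stdin piping
		sc := bufio.NewScanner(os.Stdin)
		for sc.Scan() {
			inputs = append(inputs, sc.Text())
		}
	}
	for _, text := range inputs {
		emb, err := embedText(model, text)
		if err != nil {
			fmt.Fprintln(os.Stderr, "error:", err)
			os.Exit(1)
		}
		line, _ := json.Marshal(emb)
		fmt.Println(string(line)) // one JSON array per line
	}
}
```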
-
Daniel Hiltgen authored
The scheduler updates free VRAM based on currently loaded models. This was mutating the persisted list of GPUs, and when coupled with the non-refreshing logic for Metal, it led to stale low-VRAM reporting after unload. The fix is to make sure GPU discovery always returns a copy, so the scheduler's GPU list is in fact ephemeral and doesn't leak any temporary adjustments back into the persistent list.
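A minimal sketch of the aliasing bug and the copy-on-return fix, with illustrative types (not ollama's actual discovery code):

```go
package main

import "fmt"

// GPU is an illustrative stand-in for a discovered device record.
type GPU struct {
	ID       string
	FreeVRAM uint64
}

// persisted is the long-lived discovery result.
var persisted = []GPU{{ID: "gpu0", FreeVRAM: 8 << 30}}

// Buggy: hands out the persisted slice itself, so scheduler-side
// adjustments to FreeVRAM mutate the persistent list.
func gpusShared() []GPU { return persisted }

// Fixed: returns a copy, so the scheduler's adjustments stay ephemeral.
// (Copying suffices here because GPU holds values, not pointers.)
func gpusCopy() []GPU {
	out := make([]GPU, len(persisted))
	copy(out, persisted)
	return out
}

func main() {
	g := gpusCopy()
	g[0].FreeVRAM -= 4 << 30                 // account for a loaded model
	fmt.Println(persisted[0].FreeVRAM >> 30) // still 8: nothing leaked back
}
```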
-
Patrick Devine authored
-
Daniel Hiltgen authored
The behavior change in 0.12.4 is most likely the root cause of hangs some users are seeing. This reverts to the 0.12.3 code, with some added trace logging.
-
Youdon authored
-
Daniel Hiltgen authored
-
Grace authored
* routes/types: add tool call id
---------
Co-authored-by: ParthSareen <parth.sareen@ollama.com>
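A hedged sketch of what threading a tool call ID through the API types might look like; the field names and tags are illustrative, not ollama's actual definitions:

```go
package api // illustrative package name

import "encoding/json"

// ToolCall pairs a model-issued function call with an ID so the matching
// tool result message can be correlated back to it.
type ToolCall struct {
	ID       string           `json:"id,omitempty"`
	Function ToolCallFunction `json:"function"`
}

// ToolCallFunction carries the called function's name and raw JSON arguments.
type ToolCallFunction struct {
	Name      string          `json:"name"`
	Arguments json.RawMessage `json:"arguments"`
}
```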
-
Daniel Hiltgen authored
-
- 04 Nov, 2025 5 commits
-
-
Daniel Hiltgen authored
* discovery: only retry AMD GPUs
  CUDA and Vulkan don't crash on unsupported devices, so retry isn't necessary. This also refactors the code to shift the Library-specific logic into the ml package.
* review comments
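A minimal sketch of that retry policy, with illustrative names (not the actual ml package code):

```go
package ml // illustrative package name

// shouldRetryDiscovery reports whether a backend's device enumeration is
// worth retrying after a failure. Per this change, only AMD needs it:
// CUDA and Vulkan don't crash on unsupported devices.
func shouldRetryDiscovery(library string) bool {
	return library == "ROCm" // assumption: AMD devices report as ROCm here
}
```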
-
virajwad authored
* PDH free memory skeleton
* Add PDH printing
* Add LUID support for Vulkan
* wire luid from ggml-vulkan to mem-dxgi-pdh file
* Fix to ggml-impl
* Continue skeleton
* Implemented ggml_dxgi_pdh_get_device_memory
* fix comments
* Fix - change value GB to bytes
* add ifdefs to only support windows and not linux
* modify error codes
* Finished ggml_dxgi_pdh_init() function
* completed ggml_dxgi_pdh_release()
* Formatting changes, add static to functions
* fix build errors
* fix go build error
* fix luid - now should match between dxgi and vulkan
* Fix the free memory reporting (was using copy by value, change to reference)
* keep only dxgi1_2.h
* Modifications based on PR feedback
* fix merge conflicts (2) and fix desc1.description printout
* move dxgi + pdh api calls to before the vendor specific library calls
* change from 3 samples to 1 sample for PDH
* modify when old_mode is set
* add fix for building MacOS
* fix release and returns for other vendors
* add patch file
-
Daniel Hiltgen authored
* app: add code for macOS and Windows apps under 'app'
* app: add readme
* app: windows and linux only for now
* ci: fix ui CI validation
---------
Co-authored-by: jmorganca <jmorganca@gmail.com>
-
Daniel Hiltgen authored
Also adjusts the vulkan windows build pattern to match recent changes in other backends so incremental builds are faster.
-
Jesse Gross authored
The initial implementation of qwen3-vl:235b exceeded the maximum graph size based on the number of tensors. Although this was later fixed through the use of the mrope operation, we are close to the limit in some cases. This updates GGML to track current llama.cpp usage.
-
- 03 Nov, 2025 3 commits
-
-
Rajath Bail authored
-
Michael Yang authored
-
Ryan Coleman authored
-
- 02 Nov, 2025 1 commit
-
-
Attogram Project authored
-
- 31 Oct, 2025 4 commits
-
-
Jesse Gross authored
We pass invalid pointers when we check the size of the required compute graph before fitting. Some CUDA APIs validate these pointers, but we can just skip them during this phase. cudaMemsetAsync is one of these that we weren't skipping, but we never previously took the code path that used it. Now that we have enabled op_offload, we can hit it in memory-pressured situations.
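In spirit, the fix looks like the following (illustrative Go; the real code lives in the C/C++ backend): ops whose backend call validates pointer arguments are skipped during the size-measuring pass.

```go
package main

import "fmt"

// op is an illustrative graph node; validatesPtrs marks ops whose backend
// call (like cudaMemsetAsync) checks its pointer arguments.
type op struct {
	name          string
	validatesPtrs bool
}

// run executes the graph. During the reserve pass, used only to measure
// the required compute graph, tensor pointers are placeholders, so any op
// that would validate them is skipped.
func run(ops []op, reserving bool) {
	for _, o := range ops {
		if reserving && o.validatesPtrs {
			continue // skip: pointers aren't real yet
		}
		fmt.Println("exec", o.name)
	}
}

func main() {
	graph := []op{{"mul_mat", false}, {"memset_async", true}}
	run(graph, true)  // measuring pass: memset_async skipped
	run(graph, false) // real pass: everything runs
}
```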
-
Daniel Hiltgen authored
In CPU-only setups the LibOllamaPath was omitted, causing us not to load the ggml-cpu-XXX libraries during inference.
-
Daniel Hiltgen authored
This will help bubble up more crash errors
-
nicole pardal authored
This PR removes a redundant test from TestAPIEmbeddings. The contents of this test already exist in embed_test.go and model_arch_test.go.
-
- 30 Oct, 2025 11 commits
-
-
Daniel Hiltgen authored
On Windows, AMD IDs are numeric and can reorder based on the filter environment. By passing in the filter env on a full discovery refresh, we'll only look at the actual devices and ignore unsupported iGPUs. Without this, iGPU VRAM was incorrectly being used to populate the dGPU on some systems.
-
Jesse Gross authored
When a model is partially offloaded to system RAM, we can either do the calculations on the CPU or we can temporarily transfer the data to the GPU to do the calculations there. Small batches tend to be better on the CPU, large batches on the GPU. The llamarunner used the GPU in most cases and the ollamarunner used the CPU. Although the ollamarunner saw an improvement in token generation performance, there was a large performance hit in prompt processing (3-10x). There is an existing heuristic to dynamically switch between these two modes, but in practice it doesn't have enough information to make that decision accurately. This adds authoritative data so the check works, getting the best of both worlds. Fixes #12037
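A hedged sketch of the kind of decision this enables; the threshold and names are assumptions, not ollama's actual heuristic:

```go
package main

import "fmt"

// pickDevice chooses where to run a partially offloaded layer's math.
// Large batches (prompt processing) amortize the cost of shipping data to
// the GPU; small batches (token generation) are better left on the CPU.
func pickDevice(batchTokens, gpuWorthwhileAt int) string {
	if batchTokens >= gpuWorthwhileAt {
		return "gpu"
	}
	return "cpu"
}

func main() {
	fmt.Println(pickDevice(512, 32)) // prompt processing -> gpu
	fmt.Println(pickDevice(1, 32))   // token generation  -> cpu
}
```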
-
Jesse Gross authored
We currently allocate the worst-case batch for max-sized batches, which corresponds to prompt processing. However, there are some cases where the generated graph is different for small and large batches. To ensure that we don't need to allocate memory later after layout has taken place, we should run the worst-case batch both ways and take the larger amount of memory. This does not noticeably affect loading speed, as the most expensive part of this logic comes from image processing, which does not occur during token generation.
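A sketch of the reserve-both-ways idea, with an illustrative measuring callback (requires Go 1.21+ for the `max` builtin):

```go
package main

import "fmt"

// reserveWorstCase measures the graph for both the large (prompt) and
// small (generation) batch shapes and keeps the larger footprint, so no
// allocation is needed after layout has taken place.
func reserveWorstCase(measure func(batchSize int) uint64, large, small int) uint64 {
	return max(measure(large), measure(small))
}

func main() {
	// Illustrative measurement: pretend the generation-sized graph carries
	// a fixed overhead that can exceed the prompt-sized graph.
	measure := func(n int) uint64 {
		if n == 1 {
			return 900 // different graph shape for token generation
		}
		return uint64(n) * 2
	}
	fmt.Println(reserveWorstCase(measure, 512, 1)) // 1024
}
```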
-
Daniel Hiltgen authored
Windows gets confused when we try to hand the stderr file descriptor to the subprocess children. This ensures the log output always shows up.
-
Patrick Devine authored
-
Michael Yang authored
* ml(ggml): mrope
* interleave mrope
-
Michael Yang authored
-
Michael Yang authored
this change fixes two bugs with `ollama rm`:
1. before a model is removed, it should first be stopped. this only happened for the first argument and was skipped for all other models
2. models are unloaded indiscriminately. this errors for cloud models and should be omitted
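A hedged sketch of the corrected loop described above; the helper names are hypothetical, not ollama's actual functions:

```go
package main // scaffolding; these helpers stand in for the real ones

func isCloudModel(name string) bool { return false }
func stopModel(name string) error   { return nil }
func removeModel(name string) error { return nil }

// removeAll stops and removes every named model, not just the first, and
// skips the local stop/unload step for cloud models, which would error.
func removeAll(names []string) error {
	for _, name := range names {
		if !isCloudModel(name) {
			if err := stopModel(name); err != nil {
				return err
			}
		}
		if err := removeModel(name); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	_ = removeAll([]string{"llama3", "some-cloud-model"})
}
```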
-
Michael Yang authored
this change fixes images with an alpha channel by overlaying the image onto a white background
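A minimal sketch of that compositing step using Go's standard image/draw package (the actual change may differ in detail):

```go
package main

import (
	"image"
	"image/color"
	"image/draw"
)

// flattenOnWhite resolves an image's alpha channel by compositing it over
// an opaque white background.
func flattenOnWhite(src image.Image) *image.RGBA {
	b := src.Bounds()
	dst := image.NewRGBA(b)
	// Fill with opaque white first...
	draw.Draw(dst, b, image.NewUniform(color.White), image.Point{}, draw.Src)
	// ...then composite the source over it, resolving the alpha channel.
	draw.Draw(dst, b, src, b.Min, draw.Over)
	return dst
}

func main() {
	img := image.NewNRGBA(image.Rect(0, 0, 2, 2)) // fully transparent input
	_ = flattenOnWhite(img)                       // comes back opaque white
}
```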
-
Michael Yang authored
* mulmat
* permute
-
Athiban Sharon authored
Fixed broken docs links
-
- 29 Oct, 2025 5 commits
-
-
Grace authored
Eats extra whitespace at the end/beginning of content
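This likely amounts to something like a trim pass at content boundaries; an assumption, as the actual parser logic may differ:

```go
package main

import (
	"fmt"
	"strings"
)

func main() {
	content := "  \n hello world \t\n"
	// Trim extra whitespace at the beginning/end of the content.
	fmt.Printf("%q\n", strings.TrimSpace(content)) // "hello world"
}
```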
-
Daniel Hiltgen authored
this should reduce zombies during integration runs
-
Patrick Devine authored
-
Michael Yang authored
-
Jeffrey Morgan authored
-