- 18 Oct, 2025 2 commits
-
-
Daniel Hiltgen authored
Co-authored-by: Michael Yang <git@mxy.ng>
-
Daniel Hiltgen authored
When loading the dynamic libraries, if something goes wrong, report some details. Unfortunately this won't explain which dependencies are missing, but this breadcrumb in the logs should help us diagnose GPU discovery failures.
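A minimal sketch of the idea (the `openLibrary` stub and log wording are assumptions, not Ollama's actual loader API): report the path and the loader's own error so the log carries a usable breadcrumb.

```go
package main

import (
	"fmt"
	"log/slog"
	"os"
)

// openLibrary stands in for the platform loader (dlopen / LoadLibraryW).
// Here it only checks that the file exists so the sketch stays self-contained.
func openLibrary(path string) (uintptr, error) {
	if _, err := os.Stat(path); err != nil {
		return 0, err
	}
	return 1, nil
}

func loadBackend(path string) error {
	if _, err := openLibrary(path); err != nil {
		// The breadcrumb: which library failed and what the loader said.
		// It won't name the missing dependency, but it narrows the search.
		slog.Warn("unable to load dynamic library", "path", path, "error", err)
		return fmt.Errorf("loading %s: %w", path, err)
	}
	return nil
}

func main() {
	_ = loadBackend("/usr/lib/ollama/libggml-cuda.so")
}
```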
-
- 17 Oct, 2025 1 commit
-
-
Daniel Hiltgen authored
* test: harden scheduler tests. This removes reschedDelay, which was stale code, and adds a new configurable timeout for waitForVRAMRecovery, so tests can set the timeout to be very short and avoid the scheduler getting stuck and hitting a test timeout (see the sketch below).
* test: tune tests for partial loads. Give stress tests more time when the model is split between CPU/GPU.
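A sketch of the timeout pattern (field and function names here are illustrative, not the scheduler's real API): make the wait a struct field with a production default that tests can shrink.

```go
package main

import (
	"fmt"
	"time"
)

// Scheduler is a stand-in; only the timeout plumbing is shown.
type Scheduler struct {
	// vramRecoveryTimeout bounds how long we wait for freed VRAM to be
	// reflected by the driver. Production default; tests override it.
	vramRecoveryTimeout time.Duration
}

func NewScheduler() *Scheduler {
	return &Scheduler{vramRecoveryTimeout: 5 * time.Second}
}

func (s *Scheduler) waitForVRAMRecovery() bool {
	deadline := time.Now().Add(s.vramRecoveryTimeout)
	for time.Now().Before(deadline) {
		// poll free VRAM here; return true once it recovers
		time.Sleep(10 * time.Millisecond)
	}
	return false // timed out; the caller proceeds instead of hanging
}

func main() {
	s := NewScheduler()
	s.vramRecoveryTimeout = 20 * time.Millisecond // what a test would do
	fmt.Println("recovered:", s.waitForVRAMRecovery())
}
```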
-
- 16 Oct, 2025 11 commits
-
-
Daniel Hiltgen authored
8.7 is Jetpack only, so no need on x86 builds. 10.3 covers [G]B300.
-
Jeffrey Morgan authored
Adds a temporary global flag that causes renderers to always render images as [img]. In a follow-up change, we will consider making this the default, and this flag could eventually be removed.
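Roughly the shape of such a flag (all names here are hypothetical): a package-level toggle the renderer consults when emitting image placeholders.

```go
package main

import "fmt"

// renderImagesAsImgTag is the temporary global toggle; when true, every
// renderer emits the literal [img] placeholder instead of a
// model-specific image token. (Hypothetical name.)
var renderImagesAsImgTag = true

func renderImage(modelSpecificToken string) string {
	if renderImagesAsImgTag {
		return "[img]"
	}
	return modelSpecificToken
}

func main() {
	fmt.Println(renderImage("<|image_pad|>")) // prints [img]
}
```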
-
Grace authored
* changing initial status to take prefill into consideration
* Add separate strings for content and thinking builder
* thinking tests
* remove whitespace from string before closing think tag
-
Daniel Hiltgen authored
Forward compat on the newer driver doesn't seem to be working. This should get 5.2 working on newer drivers again.
-
Daniel Hiltgen authored
-
Daniel Hiltgen authored
The workaround has been moved into the underlying C++ code. This reverts commit e4340667.
-
Thomas Stocker authored
* vulkan: Get FilterID from Backend for Vulkan
* Fixing patch
-
weedge authored
-
zhetaicheleba authored
-
Devon Rifkin authored
openai: make tool call conversion fns public
-
Devon Rifkin authored
-
- 15 Oct, 2025 7 commits
-
-
Daniel Hiltgen authored
Initially, Vulkan support in Ollama will require building from source. Once it is more thoroughly tested and we have fixed any critical bugs, we can bundle Vulkan into the official binary releases.
-
Daniel Hiltgen authored
-
Santosh Bhavani authored
* Simplify NVML fallback for unified memory GPUs. Remove device-specific checks and the environment variable dependency for the NVML_ERROR_NOT_SUPPORTED fallback. When NVML doesn't support memory queries, unconditionally use /proc/meminfo instead of checking device names or the OLLAMA_UNIFIED_MEMORY environment variable. This provides better memory reporting by using MemAvailable, which accounts for reclaimable memory, avoiding the underreporting issue described in NVIDIA support article a_id/5728 (sketched below). Tested on an NVIDIA GB10 unified-memory iGPU with consistent and accurate memory reporting across multiple model load/unload cycles.
* Add NVML fallback patch for unified memory GPUs
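The /proc/meminfo fallback can be sketched like this (a simplified stand-in for the actual patch): MemAvailable is the kernel's estimate of allocatable memory, already accounting for reclaimable caches.

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strconv"
	"strings"
)

// memAvailable returns MemAvailable from /proc/meminfo in bytes.
// Used when NVML returns NVML_ERROR_NOT_SUPPORTED on unified-memory GPUs.
func memAvailable() (uint64, error) {
	f, err := os.Open("/proc/meminfo")
	if err != nil {
		return 0, err
	}
	defer f.Close()
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		// A line looks like: MemAvailable:   12345678 kB
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "MemAvailable:" {
			kb, err := strconv.ParseUint(fields[1], 10, 64)
			if err != nil {
				return 0, err
			}
			return kb * 1024, nil
		}
	}
	return 0, fmt.Errorf("MemAvailable not found")
}

func main() {
	if b, err := memAvailable(); err == nil {
		fmt.Printf("available: %.1f GiB\n", float64(b)/(1<<30))
	}
}
```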
-
Jesse Gross authored
-
Jeffrey Morgan authored
-
Parth Sareen authored
-
Jesse Gross authored
Currently, setting num_gpu forces the model to load with that number of layers in the current configuration. This is done regardless of any other information, which means that no eviction is performed even if another model is loaded. This behavior differs from the old estimates (and still happens for models that run on the llama engine); in those cases, models would be evicted if needed to load at the requested number of layers. That behavior is more useful and less surprising, so this changes the new estimates to match. Fixes #12580
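The intended behavior, in hedged pseudologic (types and names are illustrative, not the scheduler's real API): before honoring a forced num_gpu, evict loaded models until the request fits.

```go
package main

import "fmt"

type model struct {
	name string
	vram uint64 // bytes currently held
}

// loadWithForcedLayers sketches the fixed behavior: a forced layer count no
// longer skips eviction; loaded models are evicted until the estimate fits.
func loadWithForcedLayers(needed, freeVRAM uint64, loaded []model) []model {
	for len(loaded) > 0 && freeVRAM < needed {
		evicted := loaded[0]
		loaded = loaded[1:]
		freeVRAM += evicted.vram
		fmt.Println("evicting", evicted.name, "to honor requested num_gpu")
	}
	return loaded
}

func main() {
	loaded := []model{{"llama3:8b", 6 << 30}}
	loadWithForcedLayers(8<<30, 4<<30, loaded)
}
```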
-
- 14 Oct, 2025 6 commits
-
-
Devon Rifkin authored
qwen3-coder: support anyOf when parsing tool calls
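In JSON Schema, anyOf lets a tool parameter accept several alternative shapes. A sketch of what honoring it during tool-call parsing might look like (a minimal subset with assumed struct names, not the actual parser):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Schema is a minimal JSON Schema subset; AnyOf holds the alternatives a
// tool argument may match. (Illustrative subset only.)
type Schema struct {
	Type  string   `json:"type,omitempty"`
	AnyOf []Schema `json:"anyOf,omitempty"`
}

// accepts reports whether a decoded JSON value satisfies the schema,
// recursing into anyOf alternatives until one matches.
func accepts(s Schema, v any) bool {
	if len(s.AnyOf) > 0 {
		for _, alt := range s.AnyOf {
			if accepts(alt, v) {
				return true
			}
		}
		return false
	}
	switch s.Type {
	case "string":
		_, ok := v.(string)
		return ok
	case "number":
		_, ok := v.(float64)
		return ok
	default:
		return true
	}
}

func main() {
	var s Schema
	_ = json.Unmarshal([]byte(`{"anyOf":[{"type":"string"},{"type":"number"}]}`), &s)
	fmt.Println(accepts(s, "42"), accepts(s, 42.0), accepts(s, true)) // true true false
}
```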
-
Devon Rifkin authored
-
Daniel Hiltgen authored
On the llama runner, after the recent GGML bump, a new log line reports an incorrect 0 MiB free, a result of our patch that removes memory from the props. This adjusts the llama.cpp code to fetch the actual free memory of the active device.
-
Thomas Stocker authored
* implement the vulkan C backend
* add support in gpu.go
* add support in gen_linux.sh
* it builds
* fix segfault
* fix compilation
* fix free memory monitor
* fix total memory monitor
* update gpu.go
* fix build
* fix check_perfmon len
* remove cap_get_bound check
* fix vulkan handle releasing
* fix build on Fedora 40
* fix vulkan on windows
* make amdgpu work on arm architecture with vulkan
* add x86_64 lines in VulkanGlobs and capLinuxGlobs
* add aarch64 lines in vulkanGlobs and capLinuxGlobs
* Fix variable name
* Add vulkan build patch from @jmorganca
* Sync vendored ggml to add Vulkan support
* Updated Dockerfile (https://github.com/whyvl/ollama-vulkan/issues/7#issuecomment-2660836871)
* Installing rocm library
* This version works well, built based on https://github.com/whyvl/ollama-vulkan/issues/7#issuecomment-2660836871
* Applied 00-fix-vulkan-building.patch (work done by McBane87: https://github.com/whyvl/ollama-vulkan/issues/7#issuecomment-2660836871)
* Fixed the "detached head" issues
* Merged in the right direction
* Merging the latest stable (#2): applied 00-fix-vulkan-building.patch and implemented the vulkan backend based on the work done by whyvl, Dts0, McBane87 and others. Tested on an AMD Ryzen 7 8845HS w/ Radeon 780M Graphics with ROCm disabled:

  ```
  [GIN-debug] POST /v1/chat/completions --> github.com/ollama/ollama/server.(*Server).ChatHandler-fm (6 handlers)
  [GIN-debug] POST /v1/completions     --> github.com/ollama/ollama/server.(*Server).GenerateHandler-fm (6 handlers)
  [GIN-debug] POST /v1/embeddings      --> github.com/ollama/ollama/server.(*Server).EmbedHandler-fm (6 handlers)
  [GIN-debug] GET  /v1/models          --> github.com/ollama/ollama/server.(*Server).ListHandler-fm (6 handlers)
  [GIN-debug] GET  /v1/models/:model   --> github.com/ollama/ollama/server.(*Server).ShowHandler-fm (6 handlers)
  time=2025-03-11T13:00:40.793Z level=INFO source=gpu.go:199 msg="vulkan: load libvulkan and libcap ok"
  time=2025-03-11T13:00:40.877Z level=INFO source=gpu.go:421 msg="error looking up vulkan GPU memory" error="device is a CPU"
  time=2025-03-11T13:00:40.878Z level=WARN source=amd_linux.go:443 msg="amdgpu detected, but no compatible rocm library found. Either install rocm v6, or follow manual install instructions at https://github.com/ollama/ollama/blob/main/docs/linux.md#manual-install"
  time=2025-03-11T13:00:40.878Z level=WARN source=amd_linux.go:348 msg="unable to verify rocm library: no suitable rocm found, falling back to CPU"
  time=2025-03-11T13:00:40.879Z level=INFO source=types.go:137 msg="inference compute" id=0 library=vulkan variant="" compute=1.3 driver=1.3 name="AMD Radeon Graphics (RADV GFX1103_R1)" total="15.6 GiB" available="15.6 GiB"
  ```

  ```
  # ollama run phi4:14b
  >>> /set verbose
  Set 'verbose' mode.
  >>> how's it going?
  Hello! I'm here to help you with any questions or tasks you have. How can I assist you today? 😊

  total duration:       3.341959745s
  load duration:        18.165612ms
  prompt eval count:    15 token(s)
  prompt eval duration: 475ms
  prompt eval rate:     31.58 tokens/s
  eval count:           26 token(s)
  eval duration:        2.846s
  eval rate:            9.14 tokens/s
  >>>
  ```

* This is no longer needed
* Fixes SIGSEGV: segmentation violation running gemma3 models on ollama 0.6.0 (#21); patch provided by McBane87 on https://github.com/whyvl/ollama-vulkan/issues/21
* Applied 04-disable-mmap-vulkan.patch (from https://github.com/whyvl/ollama-vulkan/issues/7#issuecomment-2660836871)
* Pulled new upstream code for the ggml-vulkan backend
* Merged latest ollama 0.6.2 and nasrally's Flash Attention patches (#5):
  * readme: add Ellama to list of community integrations (#9800)
  * readme: add screenpipe to community integrations (#9786)
  * Add support for ROCm gfx1151 (#9773)
  * conditionally enable parallel pipelines
  * sample: make mutations in transforms explicit (#9743)
  * updated minP to use early exit, making use of sorted tokens
  * ml/backend/ggml: allocate memory with malloc when loading model (#9822)
  * runner: remove cache prompt flag from ollama runner (#9826). We do not need to bypass the prompt caching in the ollama runner yet, as only embedding models needed to bypass it. When embedding models are implemented, they can skip initializing this cache completely.
  * ollamarunner: check for minBatch of context space when shifting. Models can specify that a group of inputs needs to be handled in a single batch, but context shifting didn't respect this and could trigger a break anyway; instead, trigger the context shift earlier so that it occurs before the grouped batch. Note that there are still some corner cases: a long prompt that exceeds the context window can get truncated in the middle of an image (with current models this results in the model not recognizing the image at all, which is pretty much the expected result of truncation), and a context window set smaller than the minimum batch size can only be fixed by refusing to load the model with those settings, which can never occur with current models and default settings. Since users are unlikely to run into these scenarios, fixing them is left as a follow-up.
* Applied latest patches from McBane87 (see https://github.com/whyvl/ollama-vulkan/issues/7#issuecomment-2708820861 for details)
* Add ability to enable flash attention on vulkan (#4):
  * discover: add flash attention handling for vulkan
  * envconfig: fix typo in config.go
  As part of the process, some code was refactored and a new FlashAttention field was added to GpuInfo, since the previous solution didn't allow for a granular check via vulkan extensions. As a side effect, this now allows granular per-device FA support checking in other places.
* Revert Readme changes
* Revert
* Revert changes in amd_linux.go
* Revert changes in amd_linux.go
* Remove flashattention setting gpu.go
* Revert whitespace changes in gpu.go
* Revert changes in transforms_test.go
* Revert changes in runner.go
* Revert changes in Makefile.sync
* Revert some unintended changes in Dockerfile
* Revert vulkan copy changes in Dockerfile
* Update Vulkan code to de4c07f93783a1a96456a44dc16b9db538ee1618
* Fixed duplicate sync in ggml.go
* Revert changes in ggml.go
* Revert changes in ggml.go
* enable flash attention on vulkan
* revert remove parenthesis
* fixed flash attention enabling logic
* vk_check_flash_attention 0 means supported
* Update gpu.go
* Add vulkan to Windows build script
* Remove commented-out code
* Enable Vulkan flash attention in FlashAttentionSupported
* Fix logging
* Update Vulkan backend to e54d41befcc1575f4c898c5ff4ef43970cead75f
* Removed libcap-related code: libcap is not directly related to Vulkan and should be added by its own PR. It adds additional library dependencies for building and also requires users to run setcap or run ollama as root, which is not ideal for easy use.
* Fix unit test (add Vulkan library)
* Add vulkan to TestHomogeneousGPUs test
* vulkan: get GPU ID (ollama v0.11.5)
* disable mmap for vulkan
* Reduce changes: remove TestHomogeneousGPUs (doesn't exist on master)
* Update vulkan version to the version used in llama.cpp
* rename gpu patch to correct number
* added Vulkan API to get correct device UUID; the current UUID from pipelineCacheUUID does not match CUDA
* Fix GPU ID patch
* Remove code not in llama.cpp
* modified UUID code inside ggml
* Fix patch
* Copied minimal definition from Vulkan header
* Fix compile error on Mac: Metal is preferred, so we're disabling Vulkan for now
* Removed unused code; fix linter error in CI
* Fix patches apply
* fixing lint error
* Removed unneeded function call; somehow removing this call fixed the crashing when the Vulkan header was removed
* added missing NL
* Fixed missing members in Vulkan header; also added zero clear for some structs
* Fixed wrong structure ID
* Fixed Vulkan header: more aligned with the official header definition now
* build vulkan as separate function
* Vulkan on Windows test
* temporarily comment out gate to run windows task
* temporarily use windows-latest for build
* comment out other presets to build vulkan
* re-enable cpu
* comment out error action stop
* temporarily comment out rocm
* set vulkan path
* comment out cuda for faster turnaround
* correct vulkan install
* correct vulkan silent install
* fixed install command
* revert debugging changes (vulkan builds on windows)
* revert windows-latest
* trying to build vulkan for linux
* temporarily disable cuda and rocm
* try linux build again
* fix version
* trying to fix
* trying again
* trying again
* fix version
* fixed vulkan-sdk name
* try again
* trying again
* try without version number
* try again
* add some more extra
* trying to use version 1.4.313
* revert debugging changes
* Filter out already-supported gpus
* revert debug code
* Use runners for GPU discovery. This revamps how we discover GPUs in the system by leveraging the Ollama runner, which should eliminate inconsistency between our GPU discovery and the runners' capabilities at runtime, particularly for cases where we try to filter out unsupported GPUs; now the runner does that implicitly based on the actual device list. In some cases free VRAM reporting can be unreliable, which can lead to scheduling mistakes, so this also includes a patch to leverage more reliable VRAM reporting libraries if available. Automatic workarounds have been removed, as only one GPU leveraged this; it is now documented and will soon fall off the support matrix with the next ROCm bump. Additional cleanup of the scheduler and discovery packages can be done in the future, once we have switched on the new memory management code and removed support for the llama runner.
* timing info for runner
* WIP: wire up Vulkan with the new engine-based discovery. Not a complete implementation; free VRAM is better, but not accurate on windows.
* fix: trust the library paths from discovery when starting the runner
* fix index bug
* fix vulkan ids to be underlying
* fix: give bootstrapping more time on slow systems
* Test if Vulkan device is supported
* vk_check_flash_attention is not needed (coopmat2, coopmat and scalar implementations exist)
* Handle GGML_VK_VISIBLE_DEVICES (sketched below)
* ask for supported first
* win: fix CPU query buffer handling; try in a short loop until we get the size right
* test: harden integration tests for slow start. If the server takes a while to start up, block tests from starting until it's online, to avoid setting large timeouts in individual test cases.
* gofumpt fix
* fix build
* merge fixes
* merge fixes
* fixed build
* merge fixes
* fixing build
* fixed build
* fixed formatting
* fixed build
* fix vulkan gpu id patch
* sync llama.cpp vulkan code
* update build windows script
* merge fixes
* fix format
* fixed vulkan casing
* handle igpu as gpu
* improve case
* print out unknown library
* return Vulkan for vulkan library
* Revert "return Vulkan for vulkan library" (reverts commit 690461a12fd5e93295d174c97edefb2bc33285b1)
* fixed patch number
* return Library Name
* remove debug code
* return integrated in vulkan backend
* Return pci properties
* update patch
* directly get pci properties without parsing
* workaround for filtering devices; the correct way is to have a LibraryPosition parameter in the deviceInfo
* Revert "directly get pci properties without parsing" (reverts commit 8e0624851f5ed7d9f74518f574dfb422e4dd4dc2)
* Set FilteredID for environment filtering
* ROCm library is named ROCm
* revert changes in patch
* Create 0028-vulkan-pci-and-memory.patch
* vulkan memory patch
* casing fix
* Add more pci properties
* Added better memory management
* Added better memory management
* fixed patch
* Fixed patch
* FilterID creation grouped by library
* filter out vulkan supported by other gpu
* fixing deviceid compare
* Vulkan: fix FA coopmat1 invalid array indexing
* Use the same Vulkan version (1.4.321.1) everywhere
* Remove unneeded patch
* vulkan update
* sync vulkan glsl files
* only use the filteredid (numeric device number) for vulkan
* simplify code

---------

Signed-off-by: Vadim Grinco <vadim@grinco.eu>
Signed-off-by: Xiaodong Ye <xiaodong.ye@mthreads.com>
Co-authored-by: pufferffish <github@bandersnatch.anonaddy.com>
Co-authored-by: KOISHI KOMEIJI FROM TOUHOU 11 <fuck>
Co-authored-by: DSLstandard <qgeneral35@gmail.com>
Co-authored-by: pufferffish <me@windtfw.com>
Co-authored-by: yeongbba <yeongmo.lee@logpresso.com>
Co-authored-by: tomaThomas <tomathomas@mailbox.org>
Co-authored-by: Antoine Viallon <antoine@lesviallon.fr>
Co-authored-by: Vadim Grinco <vadim@grinco.eu>
Co-authored-by: zeo <108888572+zeozeozeo@users.noreply.github.com>
Co-authored-by: Louis Beaumont <louis.beaumont@gmail.com>
Co-authored-by: Daniel Hiltgen <dhiltgen@users.noreply.github.com>
Co-authored-by: Michael Yang <mxyng@pm.me>
Co-authored-by: Parth Sareen <parth.sareen@ollama.com>
Co-authored-by: Jeffrey Morgan <jmorganca@gmail.com>
Co-authored-by: Bruce MacDonald <brucewmacdonald@gmail.com>
Co-authored-by: Jesse Gross <jesse@ollama.com>
Co-authored-by: Nikita <50599445+nasrally@users.noreply.github.com>
Co-authored-by: Masato Nakasaka <masato.nakasaka@intel.com>
Co-authored-by: Xiaodong Ye <xiaodong.ye@mthreads.com>
Co-authored-by: Daniel Hiltgen <daniel@ollama.com>
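One behavior from the list above, device filtering via GGML_VK_VISIBLE_DEVICES, can be sketched as follows (a simplified Go stand-in for the vendored C++ logic): the variable holds comma-separated device indices, and everything else is hidden.

```go
package main

import (
	"fmt"
	"os"
	"strconv"
	"strings"
)

// visibleDevices filters a discovered device list down to the indices named
// in GGML_VK_VISIBLE_DEVICES ("0,2" keeps devices 0 and 2). An unset
// variable keeps everything.
func visibleDevices(all []string) []string {
	env := os.Getenv("GGML_VK_VISIBLE_DEVICES")
	if env == "" {
		return all
	}
	var out []string
	for _, tok := range strings.Split(env, ",") {
		i, err := strconv.Atoi(strings.TrimSpace(tok))
		if err == nil && i >= 0 && i < len(all) {
			out = append(out, all[i])
		}
	}
	return out
}

func main() {
	os.Setenv("GGML_VK_VISIBLE_DEVICES", "1")
	fmt.Println(visibleDevices([]string{"iGPU", "RX 7900 XTX"})) // [RX 7900 XTX]
}
```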
-
Devon Rifkin authored
add registries for parsers/renderers
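A registry here is presumably the usual name-to-constructor map; a minimal sketch under that assumption (the interface and function names are illustrative):

```go
package main

import "fmt"

// Parser is a stand-in for whatever interface parsers implement.
type Parser interface{ Name() string }

type qwenParser struct{}

func (qwenParser) Name() string { return "qwen3-coder" }

// registry maps a model family name to a parser constructor, so lookups
// replace switch statements scattered through the code.
var registry = map[string]func() Parser{}

func Register(name string, ctor func() Parser) { registry[name] = ctor }

func Get(name string) (Parser, bool) {
	ctor, ok := registry[name]
	if !ok {
		return nil, false
	}
	return ctor(), true
}

func main() {
	Register("qwen3-coder", func() Parser { return qwenParser{} })
	if p, ok := Get("qwen3-coder"); ok {
		fmt.Println("got parser:", p.Name())
	}
}
```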
-
Devon Rifkin authored
-
- 13 Oct, 2025 5 commits
-
-
Grace authored
* working (other than tool calls being in the incorrect order) for tool calls and tools
* Tests work, other than image tags (tests do not go through the server) and tools (not in the correct order, but contents are the same)
* testing for the qwen3vl parser - the tool parser is working
* made changes to the JSON tool parser; wraps the ToolCallFunction with a ToolCall object
* Working parser for thinking models - assumes a state of thinking, emits unambiguous content in thinking, does not call tools in thinking (see the sketch below)
* changed the parser to start with collecting content
* thinking prefill
* add hasThinkingSupport parameter to parser
* qwen3-vl -> qwen3-vl-instruct for renderer/parser
* Add hasThinkingSupport=false to QwenVLParser

---------

Co-authored-by: Devon Rifkin <drifkin@drifkin.net>
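The "assumes state of thinking" behavior can be sketched as a two-state splitter (heavily simplified; the real parser also handles tool calls and prefill):

```go
package main

import (
	"fmt"
	"strings"
)

// splitThinking assumes the model is already inside a think block (no
// opening <think> tag) and splits output at the closing tag, trimming
// trailing whitespace before it, as the commit describes.
func splitThinking(out string) (thinking, content string) {
	const closeTag = "</think>"
	if i := strings.Index(out, closeTag); i >= 0 {
		return strings.TrimRight(out[:i], " \n\t"), out[i+len(closeTag):]
	}
	return out, "" // still thinking; no content yet
}

func main() {
	think, content := splitThinking("plan the answer \n</think>The answer is 4.")
	fmt.Printf("thinking=%q content=%q\n", think, content)
}
```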
-
Gabe Goodhart authored
Llama cpp bump (df1b612): granite docling / mamba2 optimizations / multimodal encoding fixes (#12552)

* feat: Bump llama.cpp to df1b612
* fix(mtmd): Correctly encode text chunks during mtmd tokenization. There can be text chunks that appear interspersed with the image embeddings that contain template delimiter tokens for some models; these need to be correctly translated to text tokens.
* tests: Use MtmdChunk in image_test
* style: Fix unnecessary conversion linting
* fix(ggml): Revert changes to ggml_hip.cpp. These changes were done largely by our code assistant and are likely wrong.
* fix: Revert changes in mem_nvml.cpp
* feat: Update sync point to 1deee0. This brings in several more optimization commits and model support for EmbeddingGemma.
* feat: Update patches for 1deee0
* feat: sync for bump to 1deee0
* fix: Bad patch updates with errant `+`
* feat: Bump llama.cpp/ggml to 7049736
* fix: format-patches after latest bump

All work done on branch LlamaCPPBump-GraniteDocling.

---------

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
-
Jeffrey Morgan authored
-
Michael Yang authored
This reverts commit 3d32249c.
-
Michael Yang authored
deepseek's qwen3 distill uses a different rope scheme, so support both
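Supporting both presumably reduces to branching on the model's rope configuration; a hedged sketch (the metadata key and values are illustrative stand-ins, not the real GGUF fields):

```go
package main

import "fmt"

// ropeKind picks a rope implementation from model metadata. The key and
// values here are illustrative, not the actual configuration names.
func ropeKind(meta map[string]string) string {
	switch meta["rope.scaling.type"] {
	case "yarn":
		return "yarn" // the distill's divergent path, as an assumption
	default:
		return "neox" // the standard qwen3 path, as an assumption
	}
}

func main() {
	fmt.Println(ropeKind(map[string]string{"rope.scaling.type": "yarn"}))
	fmt.Println(ropeKind(map[string]string{}))
}
```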
-
- 11 Oct, 2025 5 commits
-
-
Jeffrey Morgan authored
-
Devon Rifkin authored
routes: fix built-in renderers for `api/generate`
-
Devon Rifkin authored
Made it so that when api/generate builds up a message array and generates the prompt, it now goes through the same function as `api/chat`, for consistency. This is where we hook the optional built-in renderers to bypass templates, which was missing for `api/generate` before this change. Closes: #12578
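The consistency fix amounts to funneling both handlers through one prompt builder; a structural sketch with hypothetical names:

```go
package main

import (
	"fmt"
	"strings"
)

type Message struct{ Role, Content string }

// buildPrompt is the single shared path: templates or a built-in renderer,
// chosen in exactly one place. (Hypothetical signature.)
func buildPrompt(msgs []Message, useBuiltinRenderer bool) string {
	var b strings.Builder
	for _, m := range msgs {
		if useBuiltinRenderer {
			fmt.Fprintf(&b, "<|%s|>%s", m.Role, m.Content)
		} else {
			fmt.Fprintf(&b, "%s: %s\n", m.Role, m.Content)
		}
	}
	return b.String()
}

// generateHandler now builds a message array and delegates, instead of
// rendering a template itself, so it inherits built-in renderers for free.
func generateHandler(prompt string) string {
	msgs := []Message{{Role: "user", Content: prompt}}
	return buildPrompt(msgs, true)
}

func main() {
	fmt.Println(generateHandler("hello"))
}
```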
-
frob authored
-
Daniel Hiltgen authored
-
- 10 Oct, 2025 3 commits
-
-
Michael Yang authored
hardErrCh will deadlock since forwardBatch is blocked on computeStartedCh, which never gets sent. Since the response to hardErrCh is to panic, just panic instead.
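The shape of the fix, condensed (names from the commit message, logic simplified): a send on an error channel can deadlock when the receiver is itself blocked on another channel, and if the only consumer reaction is a panic, panicking at the error site is strictly simpler.

```go
package main

import "fmt"

// reportHardErr sketches the change: instead of `hardErrCh <- err`, which
// deadlocks if the receiving goroutine is stuck waiting on another channel
// (computeStartedCh in the commit), fail loudly at the point of error.
func reportHardErr(err error) {
	panic(fmt.Sprintf("hard error: %v", err))
}

func main() {
	defer func() {
		if r := recover(); r != nil {
			fmt.Println("recovered:", r)
		}
	}()
	reportHardErr(fmt.Errorf("batch compute failed"))
}
```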
-
Daniel Hiltgen authored
* implement nvml for linux
* Improve scheduler logging when VRAM doesn't recover
-
Michael Yang authored
-