1. 07 Oct, 2025 1 commit
    • Bring back escape valve for llm libraries and fix Jetpack6 crash (#12529) · bd15eba4
      Daniel Hiltgen authored
      * Bring back escape valve for llm libraries
      
      If the new discovery logic picks the wrong library, this gives users the
      ability to force a specific one, using the same pattern as before. It can
      also speed up bootstrap discovery when one of the libraries takes a long
      time to load and ultimately binds to no devices; for example, unsupported
      AMD iGPUs can take a while to discover and rule out.
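
      A rough sketch of the escape-valve pattern, assuming an override variable
      like the historical OLLAMA_LLM_LIBRARY the commit alludes to; the function
      and variant names below are illustrative, not the actual implementation:

          // Rough sketch: if the user forces a library via the environment,
          // probe only that one and skip the rest of bootstrap discovery.
          package main

          import (
              "fmt"
              "os"
              "strings"
          )

          // pickLibraries narrows the variants to probe. The variable name
          // follows the old OLLAMA_LLM_LIBRARY pattern (assumption).
          func pickLibraries(available []string) []string {
              if forced := os.Getenv("OLLAMA_LLM_LIBRARY"); forced != "" {
                  for _, lib := range available {
                      if strings.HasPrefix(lib, forced) {
                          return []string{lib}
                      }
                  }
              }
              return available
          }

          func main() {
              fmt.Println(pickLibraries([]string{"cpu_avx2", "cuda_v12", "rocm_v6"}))
          }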
      
      * Bypass extra discovery on Jetpack systems
      
      On at least Jetpack6, cuda_v12 appears to expose the iGPU but then crashes
      in cublasInit, so if we detect a Jetpack we short-circuit and use that
      variant.
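
      A minimal sketch of the short-circuit, assuming Jetpack detection via the
      L4T marker file /etc/nv_tegra_release (a common Jetson indicator); the
      variant name is illustrative:

          // Sketch: treat the presence of the L4T release file as "this is a
          // Jetson/Jetpack system" and return the matching variant directly,
          // skipping the cuda_v12 probing that would crash later in cublasInit.
          package main

          import (
              "fmt"
              "os"
          )

          func jetpackVariant() (string, bool) {
              // /etc/nv_tegra_release is present on NVIDIA L4T (Jetson) images.
              if _, err := os.Stat("/etc/nv_tegra_release"); err == nil {
                  return "cuda_jetpack6", true // illustrative variant name
              }
              return "", false
          }

          func main() {
              if v, ok := jetpackVariant(); ok {
                  fmt.Println("Jetpack detected, using", v)
              }
          }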
  2. 06 Oct, 2025 1 commit
    • discovery: prevent dup OLLAMA_LIBRARY_PATH (#12514) · 04c18498
      Daniel Hiltgen authored
      This variable isn't currently documented or intended for users to override,
      but if a user happens to set OLLAMA_LIBRARY_PATH we were duplicating it in
      the subprocess environment, which causes problems for the new bootstrap
      discovery logic.
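
      A minimal sketch of the dedup fix, assuming the subprocess environment is
      built from os.Environ(); the function name is illustrative:

          // Sketch: build the subprocess environment by dropping any
          // user-set OLLAMA_LIBRARY_PATH before appending our own value,
          // so the variable appears exactly once.
          package main

          import (
              "fmt"
              "os"
              "strings"
          )

          func subprocessEnv(libraryPath string) []string {
              base := os.Environ()
              env := make([]string, 0, len(base)+1)
              for _, kv := range base {
                  if strings.HasPrefix(kv, "OLLAMA_LIBRARY_PATH=") {
                      continue // discard the user's value; ours wins
                  }
                  env = append(env, kv)
              }
              return append(env, "OLLAMA_LIBRARY_PATH="+libraryPath)
          }

          func main() {
              fmt.Println(subprocessEnv("/usr/lib/ollama"))
          }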
  3. 03 Oct, 2025 1 commit
    • Workaround broken NVIDIA iGPU free VRAM data (#12490) · e4340667
      Daniel Hiltgen authored
      The CUDA APIs for reporting free VRAM are useless on NVIDIA iGPU systems:
      they return only the kernel's actual free memory and ignore buff/cache
      allocations, which on a typical system quickly fill up most of the free
      system memory. As a result, we incorrectly conclude that very little
      memory is available for GPU allocations.
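
      One plausible shape of a workaround on Linux, assuming available memory is
      derived from /proc/meminfo's MemAvailable (which counts reclaimable
      buff/cache) instead of the CUDA figure; this mirrors the problem
      description rather than the exact patch:

          // Sketch: on Linux iGPU systems, derive available memory from
          // /proc/meminfo's MemAvailable, which counts reclaimable
          // buff/cache, instead of trusting the CUDA free-VRAM figure.
          package main

          import (
              "bufio"
              "fmt"
              "os"
              "strconv"
              "strings"
          )

          // memAvailableBytes parses the MemAvailable line (reported in kB).
          func memAvailableBytes() (uint64, error) {
              f, err := os.Open("/proc/meminfo")
              if err != nil {
                  return 0, err
              }
              defer f.Close()
              s := bufio.NewScanner(f)
              for s.Scan() {
                  fields := strings.Fields(s.Text())
                  if len(fields) >= 2 && fields[0] == "MemAvailable:" {
                      kb, err := strconv.ParseUint(fields[1], 10, 64)
                      return kb * 1024, err
                  }
              }
              return 0, fmt.Errorf("MemAvailable not found")
          }

          func main() {
              if b, err := memAvailableBytes(); err == nil {
                  fmt.Printf("available: %d MiB\n", b>>20)
              }
          }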
  4. 02 Oct, 2025 1 commit
  5. 01 Oct, 2025 1 commit
    • Use runners for GPU discovery (#12090) · bc8909fb
      Daniel Hiltgen authored
      This revamps how we discover GPUs in the system by leveraging the Ollama
      runner. It should eliminate inconsistency between our GPU discovery and
      the runner's capabilities at runtime, particularly in cases where we try
      to filter out unsupported GPUs: the runner now does that filtering
      implicitly, based on the actual device list. In some cases free VRAM
      reporting can be unreliable and lead to scheduling mistakes, so this also
      includes a patch to use more reliable VRAM-reporting libraries where
      available.
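
      A hedged sketch of the runner-based flow; the --list-devices flag and the
      JSON shape are assumptions for illustration, not the real runner
      interface:

          // Sketch: exec the runner and let it report the devices it can
          // actually use; unsupported GPUs simply never appear in the list.
          package main

          import (
              "encoding/json"
              "fmt"
              "os/exec"
          )

          type Device struct {
              ID        string `json:"id"`
              Library   string `json:"library"`
              FreeVRAM  uint64 `json:"free_vram"`
              TotalVRAM uint64 `json:"total_vram"`
          }

          func discoverViaRunner(runnerPath string) ([]Device, error) {
              // Hypothetical flag: ask the runner to enumerate devices and exit.
              out, err := exec.Command(runnerPath, "--list-devices").Output()
              if err != nil {
                  return nil, err
              }
              var devs []Device
              if err := json.Unmarshal(out, &devs); err != nil {
                  return nil, err
              }
              return devs, nil
          }

          func main() {
              devs, err := discoverViaRunner("./ollama-runner")
              fmt.Println(devs, err)
          }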
      
      Automatic workarounds have been removed, since only one GPU relied on
      them; that workaround is now documented. The GPU in question will soon
      fall off the support matrix with the next ROCm bump.
      
      Additional cleanup of the scheduler and discovery packages can be done in
      the future, once we have switched on the new memory management code and
      removed support for the llama runner.