1. 10 Oct, 2025 (4 commits)
  2. 09 Oct, 2025 (9 commits)
  3. 08 Oct, 2025 (4 commits)
    • kvcache: Clean up sliding window state with independent batches · 1fc35f12
      Jesse Gross authored
      Sliding window models (e.g. gpt-oss, gemma3) remove tokens that
      fall outside the cache's window each time we start a new forward pass.
      
      The cache storage needs to hold the window size for each sequence
      plus the batch size, since the batch needs to attend to the full
      window. This means that more than one window's worth of tokens is
      stored while processing the batch.
      
      When the next batch arrives, we currently look only at the
      sequences in the incoming batch when sliding the window forward.
      However, we also need to clean up the other sequences that might
      be occupying space in the batch processing buffer, so that each
      sequence uses only its window size of storage. Failing to do
      this can result in "no kv cache slot found" errors; the sketch
      below illustrates the corrected cleanup.
      
      Fixes: #10127
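      A minimal Go sketch of the fix described above, using invented names
      (Cache, Sequence, trimToWindow, StartForward) rather than Ollama's
      actual kvcache API:

        package main

        import "fmt"

        // Sequence tracks the token positions a sequence currently holds
        // in cache storage.
        type Sequence struct {
            ID     int
            Tokens []int
        }

        // Cache is a toy sliding-window KV cache.
        type Cache struct {
            windowSize int
            seqs       map[int]*Sequence
        }

        // trimToWindow drops tokens that have slid out of the window.
        func (c *Cache) trimToWindow(s *Sequence) {
            if extra := len(s.Tokens) - c.windowSize; extra > 0 {
                s.Tokens = s.Tokens[extra:]
            }
        }

        // StartForward prepares the cache for a new batch. The bug was
        // trimming only the sequences present in the batch; idle sequences
        // could keep holding window+batch tokens and exhaust cache slots.
        func (c *Cache) StartForward(batchSeqIDs []int) {
            _ = batchSeqIDs // batch membership no longer matters for cleanup
            for _, s := range c.seqs { // all sequences, not just the batch's
                c.trimToWindow(s)
            }
        }

        func main() {
            c := &Cache{windowSize: 4, seqs: map[int]*Sequence{
                1: {ID: 1, Tokens: []int{0, 1, 2, 3, 4, 5}}, // idle this batch
                2: {ID: 2, Tokens: []int{6, 7, 8}},
            }}
            c.StartForward([]int{2})
            fmt.Println(len(c.seqs[1].Tokens)) // 4: trimmed despite being idle
        }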
    • discover: Disable flash attention for Jetson Xavier (CC 7.2) · aa45f7ce
      Jesse Gross authored
      GGML picks the wrong kernel and these systems fail with:
      Sep 28 22:25:39 xavier ollama[48999]: //ml/backend/ggml/ggml/src/ggml-cuda/fattn-wmma-f16.cu:437:
      ERROR: CUDA kernel flash_attn_ext_f16 has no device code compatible with CUDA arch 720. ggml-cuda.cu
      was compiled for: __CUDA_ARCH_LIST__
      
      Fixes #12442
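      A short, hypothetical Go sketch of the gating logic; the cudaDevice
      type and supportsFlashAttention helper are illustrative, not the
      discover package's real API:

        package main

        import "fmt"

        // cudaDevice describes a discovered CUDA GPU.
        type cudaDevice struct {
            Name  string
            Major int // compute capability major version
            Minor int // compute capability minor version
        }

        // supportsFlashAttention reports whether flash attention should be
        // enabled for a device. CC 7.2 (Jetson Xavier) is excluded because
        // GGML selects a kernel with no device code for CUDA arch 720.
        func supportsFlashAttention(d cudaDevice) bool {
            cc := d.Major*10 + d.Minor
            return cc != 72
        }

        func main() {
            xavier := cudaDevice{Name: "Jetson Xavier", Major: 7, Minor: 2}
            fmt.Println(supportsFlashAttention(xavier)) // false
        }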
    • Integration test tuning (#12492) · 4e5d862e
      Daniel Hiltgen authored
      Remove some flaky scenarios, and switch to chat for better reliability
  4. 07 Oct, 2025 (2 commits)
    • 303be930
      Daniel Hiltgen authored
    • Bring back escape valve for llm libraries and fix Jetpack6 crash (#12529) · bd15eba4
      Daniel Hiltgen authored
      * Bring back escape valve for llm libraries
      
      If the new discovery logic picks the wrong library, this gives users the
      ability to force a specific one using the same pattern as before (see
      the sketch after this message). It can also speed up bootstrap discovery
      if one of the libraries takes a long time to load and ultimately binds
      to no devices. For example, unsupported AMD iGPUs can sometimes take a
      while to discover and rule out.
      
      * Bypass extra discovery on Jetpack systems
      
      On at least Jetpack6, cuda_v12 appears to expose the iGPU but crashes
      later in cublasInit, so if we detect a Jetpack we short-circuit and use
      that variant.
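      A minimal sketch of the override, assuming the pre-existing
      OLLAMA_LLM_LIBRARY environment variable is the pattern the commit
      refers to; the filtering code itself is illustrative:

        package main

        import (
            "fmt"
            "os"
            "strings"
        )

        // filterLibraries returns only the user-forced library when an
        // override is set, skipping discovery of everything else.
        func filterLibraries(available []string) []string {
            forced := os.Getenv("OLLAMA_LLM_LIBRARY")
            if forced == "" {
                return available // no override: probe everything
            }
            for _, lib := range available {
                if strings.EqualFold(lib, forced) {
                    return []string{lib} // load only the forced library
                }
            }
            return available // unknown override: fall back to full discovery
        }

        func main() {
            os.Setenv("OLLAMA_LLM_LIBRARY", "cuda_v13")
            libs := []string{"cpu", "cuda_v12", "cuda_v13", "rocm"}
            fmt.Println(filterLibraries(libs)) // [cuda_v13]
        }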
  5. 06 Oct, 2025 (3 commits)
  6. 05 Oct, 2025 (1 commit)
  7. 04 Oct, 2025 (2 commits)
  8. 03 Oct, 2025 (10 commits)
  9. 02 Oct, 2025 (4 commits)
  10. 01 Oct, 2025 (1 commit)
    • Use runners for GPU discovery (#12090) · bc8909fb
      Daniel Hiltgen authored
      This revamps how we discover GPUs in the system by leveraging the Ollama
      runner. This should eliminate inconsistency between our GPU discovery and
      the runner's capabilities at runtime, particularly for cases where we try
      to filter out unsupported GPUs. Now the runner does that implicitly based
      on the actual device list (see the sketch below). In some cases free VRAM
      reporting can be unreliable, which can lead to scheduling mistakes, so
      this also includes a patch to leverage more reliable VRAM reporting
      libraries if available.
      
      Automatic workarounds have been removed, as only one GPU leveraged them;
      that case is now documented. This GPU will soon fall off the support
      matrix with the next ROCm bump.
      
      Additional cleanup of the scheduler and discovery packages can be done in
      the future once we have switched on the new memory management code and
      removed support for the llama runner.
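      An illustrative Go sketch of runner-based discovery: instead of probing
      GPUs with separate management libraries, ask the runner binary which
      devices its backends can actually use. The --list-devices flag and the
      JSON schema below are invented for this sketch; the real runner protocol
      differs:

        package main

        import (
            "encoding/json"
            "fmt"
            "os/exec"
        )

        // device is what the (hypothetical) runner reports per GPU.
        type device struct {
            ID       string `json:"id"`
            Library  string `json:"library"`   // e.g. cuda_v13, rocm
            FreeVRAM uint64 `json:"free_vram"` // bytes, per the backend
        }

        // discover launches the runner and trusts its device list, so the
        // scheduler sees exactly the GPUs the runner can use at runtime.
        func discover(runnerPath string) ([]device, error) {
            out, err := exec.Command(runnerPath, "--list-devices").Output()
            if err != nil {
                return nil, fmt.Errorf("runner discovery failed: %w", err)
            }
            var devs []device
            if err := json.Unmarshal(out, &devs); err != nil {
                return nil, err
            }
            return devs, nil
        }

        func main() {
            devs, err := discover("./ollama-runner")
            if err != nil {
                fmt.Println(err)
                return
            }
            fmt.Println(devs)
        }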