1. 14 Oct, 2025 2 commits
  2. 13 Oct, 2025 5 commits
  3. 11 Oct, 2025 5 commits
  4. 10 Oct, 2025 10 commits
  5. 09 Oct, 2025 9 commits
  6. 08 Oct, 2025 4 commits
    • Patrick Devine
    • kvcache: Clean up sliding window state with independent batches · 1fc35f12
      Jesse Gross authored
      Sliding window models (e.g. gpt-oss, gemma3) evict tokens that
      fall outside the cache's window each time we start a new forward
      pass.

      The cache storage needs to hold the window size for each sequence
      plus the batch size, since the batch needs to attend to the full
      window. This means more than a window's worth of tokens is stored
      while the batch is being processed.
      
      When the next batch arrives, we currently look only at the
      sequences in the incoming batch when sliding the window forward.
      However, we also need to clean up the other sequences that may be
      occupying space in the batch processing buffer, so that each
      sequence uses only its window size of storage. Failing to do this
      can result in "no kv cache slot found" errors.
      
      Fixes: #10127
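      The cleanup described above can be sketched as follows. This is an illustrative model, not ollama's actual kvcache API: `Cache`, `StartForward`, and `windowSize` are assumed names, and token positions stand in for real cache slots.

```go
package main

import "fmt"

// Cache is a toy sliding-window KV cache tracking token positions per sequence.
type Cache struct {
	windowSize int
	tokens     map[int][]int // cached token positions, keyed by sequence ID
}

// trim drops tokens that have fallen outside one sequence's window.
func (c *Cache) trim(seq int) {
	toks := c.tokens[seq]
	if len(toks) == 0 {
		return
	}
	newest := toks[len(toks)-1]
	keep := toks[:0] // in-place filter
	for _, pos := range toks {
		if pos > newest-c.windowSize {
			keep = append(keep, pos)
		}
	}
	c.tokens[seq] = keep
}

// StartForward runs at the beginning of each forward pass. The bug was
// trimming only the sequences present in the incoming batch; the fix trims
// every sequence so idle ones don't keep holding extra slots, which could
// otherwise surface as "no kv cache slot found".
func (c *Cache) StartForward(batchSeqs []int) {
	for seq := range c.tokens { // all sequences, not just batchSeqs
		c.trim(seq)
	}
	_ = batchSeqs // the incoming batch would be assigned fresh slots here
}

func main() {
	c := &Cache{windowSize: 4, tokens: map[int][]int{
		0: {0, 1, 2, 3, 4, 5, 6, 7}, // in the incoming batch
		1: {0, 1, 2, 3, 4, 5},       // idle, but still over its window
	}}
	c.StartForward([]int{0})
	fmt.Println(len(c.tokens[0]), len(c.tokens[1])) // both trimmed to the window
}
```

      Note that sequence 1 is trimmed even though only sequence 0 is in the batch; trimming inside `StartForward` for all sequences is the essence of the fix.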
    • discover: Disable flash attention for Jetson Xavier (CC 7.2) · aa45f7ce
      Jesse Gross authored
      GGML picks the wrong kernel and these systems fail with:
      Sep 28 22:25:39 xavier ollama[48999]: //ml/backend/ggml/ggml/src/ggml-cuda/fattn-wmma-f16.cu:437:
      ERROR: CUDA kernel flash_attn_ext_f16 has no device code compatible with CUDA arch 720. ggml-cuda.cu
      was compiled for: __CUDA_ARCH_LIST__
      
      Fixes #12442
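      A gate like the one this commit adds can be sketched as a compute-capability check. This is an assumption-laden sketch, not ollama's discover package: `flashAttnSupported` is a hypothetical name, and the `cc >= 70` floor is an illustrative assumption rather than ollama's actual policy.

```go
package main

import "fmt"

// flashAttnSupported decides whether to enable flash attention for a CUDA
// device, given its compute capability. CC 7.2 (Jetson Xavier) is special-
// cased off because GGML selects a WMMA f16 kernel with no compatible
// device code on that architecture.
func flashAttnSupported(major, minor int) bool {
	cc := major*10 + minor
	if cc == 72 { // Jetson Xavier: wrong kernel chosen, disable flash attention
		return false
	}
	return cc >= 70 // assumption: otherwise allow tensor-core generations
}

func main() {
	fmt.Println(flashAttnSupported(7, 2)) // Xavier: disabled
	fmt.Println(flashAttnSupported(8, 7)) // e.g. Jetson Orin: enabled
}
```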
    • Integration test tuning (#12492) · 4e5d862e
      Daniel Hiltgen authored
      Remove some flaky scenarios, and switch to chat for better reliability
  7. 07 Oct, 2025 2 commits
    • 303be930
      Daniel Hiltgen authored
    • Bring back escape valve for llm libraries and fix Jetpack6 crash (#12529) · bd15eba4
      Daniel Hiltgen authored
      * Bring back escape valve for llm libraries
      
      If the new discovery logic picks the wrong library, this gives users
      the ability to force a specific one using the same pattern as before.
      It can also speed up bootstrap discovery when one of the libraries
      takes a long time to load and ultimately binds to no devices. For
      example, unsupported AMD iGPUs can sometimes take a while to discover
      and rule out.
      
      * Bypass extra discovery on jetpack systems
      
      On at least Jetpack 6, cuda_v12 appears to expose the iGPU but then
      crashes in cublasInit, so if we detect a Jetpack system we
      short-circuit and use that variant.
  8. 06 Oct, 2025 3 commits