1. 20 Nov, 2025 1 commit
  2. 19 Nov, 2025 1 commit
    • kvcache: Use SetRows to store cache data · 53985b3c
      Jesse Gross authored
      We currently copy data into the KV cache in contiguous buffers using
      ggml_cpy(). ggml_set_rows() was introduced to allow scatter operations
      so that contiguous buffers are no longer required. The primary
      benefit of this is that we no longer need to perform defragmentation.
      
      However, GGML recently removed an optimization for ggml_cpy() and
      we picked it up in 544b6739 "ggml update to b6840 (#12791)". This
      caused a roughly 40% drop in token generation performance on CUDA
      due to CUDA graphs no longer being used. By switching to
      ggml_set_rows(), the original optimization is no longer necessary
      and CUDA performance is restored.
      
      Fixes #13112
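The scatter behavior that makes this possible can be sketched in Go; the names and flat slice layout below are illustrative, not the ggml or Ollama API:

```go
package main

// setRows scatters each row of src into cache at the slot index given by
// dstRows, so incoming rows no longer need a contiguous destination.
// This mirrors the scatter semantics ggml_set_rows() provides, in
// contrast to ggml_cpy(), which needs one contiguous destination range.
func setRows(cache, src [][]float32, dstRows []int) {
	for i, row := range src {
		copy(cache[dstRows[i]], row)
	}
}
```

With scatter writes, new tokens can land in whatever free slots exist, so the cache never needs to be defragmented into a contiguous region first.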
  3. 28 Oct, 2025 1 commit
  4. 08 Oct, 2025 1 commit
    • kvcache: Clean up sliding window state with independent batches · 1fc35f12
      Jesse Gross authored
      Sliding window models (e.g. gpt-oss, gemma3) remove tokens that
      are out of the cache's window each time we start a new forward pass.
      
      The cache storage needs to hold the window size for each sequence
      plus the batch size, since the batch needs to attend to the full
      window. This means that more than a window's worth of tokens is
      stored while processing the batch.
      
      When the next batch comes, we are currently only looking at the
      sequences in the incoming batch to slide the window forward.
      However, we also need to clean up the other sequences that might
      be occupying space in the batch processing buffer to ensure each
      sequence is only using its window size of storage. Failure to do
      this can result in "no kv cache slot found" errors.
      
      Fixes: #10127
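The fix described above amounts to trimming every known sequence, not just those in the incoming batch. A hedged Go sketch with hypothetical names:

```go
package main

// trimAllSequences slides the window forward for every sequence the
// cache knows about, not only the ones in the incoming batch, freeing
// positions that fell out of each sequence's window. The map-of-positions
// representation is illustrative, not the actual kvcache layout.
func trimAllSequences(seqPositions map[int][]int, windowSize int) map[int][]int {
	out := make(map[int][]int)
	for seq, positions := range seqPositions {
		// Find the newest position for this sequence.
		var maxPos int
		for _, p := range positions {
			if p > maxPos {
				maxPos = p
			}
		}
		// Keep only positions inside the trailing window.
		cutoff := maxPos - windowSize + 1
		var kept []int
		for _, p := range positions {
			if p >= cutoff {
				kept = append(kept, p)
			}
		}
		out[seq] = kept
	}
	return out
}
```

Trimming only the batch's sequences would leave stale entries from idle sequences occupying slots, eventually producing the "no kv cache slot found" failure.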
  5. 31 Jul, 2025 1 commit
    • kvcache: Enable SWA to retain additional entries · 4183bb05
      Jesse Gross authored
      Models that use sliding window attention can only resume a sequence
      from the cache if it falls within the saved windows. This works well
      if the next message picks up where the old one left off. However, it
      generally prevents a partial prefix match unless the entire conversation
      falls within the sliding window.
      
      This can be a problem with reasoning models where the traces are
      supposed to be removed from future messages, forcing the entire
      history to be re-evaluated.
      
      This change allows models to specify that a larger amount of the
      history be retained in memory, enabling resumption from more
      partial prefixes.
      It still respects the window that the model was trained on for
      token generation.
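A minimal sketch of the resulting sizing rule, with hypothetical parameter names (keepHistory standing in for the model-specified retention; this is not the actual kvcache API):

```go
package main

// cacheCapacity returns how many positions a sliding-window layer keeps
// resident. keepHistory lets a model retain more history than its
// trained window so partial prefixes can be resumed, while attention
// masking during generation still honors windowSize.
func cacheCapacity(windowSize, keepHistory, batchSize int) int {
	retained := windowSize
	if keepHistory > retained {
		retained = keepHistory
	}
	// The in-flight batch must also fit alongside the retained history.
	return retained + batchSize
}
```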
  6. 22 May, 2025 2 commits
    • ml: Panic rather than return error on tensor allocation failure · 1f371ea9
      Jesse Gross authored
      FromFloatSlice and FromIntSlice return an error if the shape doesn't
      match the passed data or if memory can't be allocated. Since these
      are inputs, the memory being allocated is system memory rather than VRAM.
      
      In many cases, the caller can't really handle the error and panics.
      
      Empty and Zeros directly panic if they can't allocate memory.
      
      This makes things consistent by panicking for the first two cases,
      removing a fair amount of error handling code. This is also consistent
      with how Go typically handles these situations.
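The convention can be illustrated with a hedged Go sketch (fromFloatSlice here is a stand-in, not the actual ml package signature):

```go
package main

import "fmt"

// fromFloatSlice panics when the data length doesn't match the requested
// shape, rather than returning an error the caller can't meaningfully
// handle. A shape mismatch on an input tensor is a programming bug, so
// panicking matches how Go typically treats such conditions.
func fromFloatSlice(data []float32, shape ...int) []float32 {
	n := 1
	for _, d := range shape {
		n *= d
	}
	if n != len(data) {
		panic(fmt.Sprintf("invalid shape %v for %d elements", shape, len(data)))
	}
	return data
}
```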
    • ollamarunner: Memory usage reporting · 73d6a82c
      Jesse Gross authored
      This provides granular information about the backend memory allocations
      required by the runner:
       - Per backend
       - Per layer
       - Weights, cache and graph
       - Allocation status
      
      This can be used for debugging and validating memory estimates.
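Hypothetical Go types for a report with this shape (not the actual ollamarunner structures):

```go
package main

// LayerMemory splits one layer's footprint into the three categories
// named above: weights, cache, and graph.
type LayerMemory struct {
	Weights, Cache, Graph uint64
}

// BackendMemory groups per-layer figures under one backend (e.g. a GPU)
// together with whether the allocation actually succeeded.
type BackendMemory struct {
	Name      string
	Layers    []LayerMemory
	Allocated bool
}

// Total sums every category across all layers for this backend.
func (b BackendMemory) Total() uint64 {
	var t uint64
	for _, l := range b.Layers {
		t += l.Weights + l.Cache + l.Graph
	}
	return t
}
```

Granularity like this lets a memory estimate be checked per backend and per layer instead of only as one opaque total.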
  7. 25 Apr, 2025 1 commit
  8. 18 Apr, 2025 1 commit
  9. 08 Apr, 2025 2 commits
    • kvcache: stub out test structs · d98bfe7e
      Michael Yang authored
    • ollamarunner: Preallocate worst case graph at startup · dbb149e6
      Jesse Gross authored
      Currently, the KV cache and graph are lazily allocated as needed.
      The cache is fully allocated on first use of the corresponding
      layer whereas the graph grows with the size of the context.
      
      This can be an issue if another application allocates more VRAM
      after we do our calculations - Ollama will crash in the middle of
      inference. If we instead allocate the maximum needed memory at
      startup of the runner, we will either succeed or fail at that point
      rather than at some surprising time in the future.
      
      Currently, this only generates a worst case batch for text, which
      means that vision models may get a partial allocation and continue
      to lazily allocate the rest.
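The succeed-or-fail-at-startup idea can be modeled with a hedged sketch; the byte counts and free-memory figure are illustrative inputs, not values queried from a real backend:

```go
package main

import "fmt"

// reserveWorstCase sketches the startup-time strategy: claim the maximum
// cache and graph memory immediately so any shortfall surfaces now,
// rather than as a crash in the middle of inference after another
// application has taken VRAM.
func reserveWorstCase(free, cacheBytes, graphBytes uint64) error {
	if cacheBytes+graphBytes > free {
		return fmt.Errorf("need %d bytes for worst case, only %d free",
			cacheBytes+graphBytes, free)
	}
	// In the real runner this is where the full-size cache and the
	// worst-case graph would actually be allocated; here we only model
	// the succeed-or-fail decision.
	return nil
}
```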
  10. 03 Apr, 2025 2 commits
  11. 02 Apr, 2025 1 commit
    • kvcache: Add check for values that fall out of sliding window cache · b4297006
      jmorganca authored
      The sliding window cache trims entries that are outside the window for
      the latest token. This works when we are extending the cache, such as
      when the conversation continues. However, if we have a partial overlap
      in conversation (including the BOS tokens), then we resume from a past
      point in the conversation and the needed tokens are no longer stored
      in memory. This verifies that the new window overlaps with the old one
      before reusing the cache.
      Co-authored-by: Jesse Gross <jesse@ollama.com>
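The overlap check can be sketched as a small predicate; canResume is a hypothetical helper, not the kvcache API:

```go
package main

// canResume reports whether a cached sliding-window sequence can be
// reused: the position we want to resume from must still fall inside
// the retained range [oldestCached, newestCached]. If the prompt only
// partially overlaps and the needed tokens were already trimmed, the
// cache must be discarded and the prefix re-evaluated.
func canResume(oldestCached, newestCached, resumePos int) bool {
	return resumePos >= oldestCached && resumePos <= newestCached
}
```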
  12. 27 Mar, 2025 1 commit
    • ml: Remove Output from Context interface · 01aa7887
      Jesse Gross authored
      Model implementations should use Input for all of their tensors
      supplied to the model. This includes tensors that relate to the
      outputs, which is confusing since there is also an Output function.
      
      Since Output is only used internally in GGML and not used by any
      model implementations, we can remove it from the interface to
      reduce confusion.
  13. 21 Mar, 2025 2 commits
    • kvcache: Optimize sliding window attention · 2d6eac90
      Jesse Gross authored
      Currently sliding window attention allocates and uses the full
      context size and just masks out any tokens that are outside of the
      window. However, we really only need (roughly) the sliding window
      size.
      
      At large context sizes this improves two things:
       - Memory allocated - since the full context size is no longer allocated up front,
         memory requirements drop substantially. On Gemma3:4b with a 32k
         context window, total memory usage (including weights and non-sliding
         layers) drops from ~20GB to ~8GB.
       - Computation - ranges that are completely outside of the sliding
         window are now removed from the tensors that are returned from the
         cache rather than simply being masked out. This results in more
         efficient processing, scaling with the size of the context that
         has actually been used.
      
      Notably, this does not update the scheduler for any model to be aware of
      the smaller memory requirements. This is difficult for Gemma3 because
      the layers are heterogeneous between sliding and non-sliding attention.
      As a result, while actual memory consumption will be reduced, the
      scheduler will over-estimate the requirements of the model. This means
      that splitting between GPUs or GPUs and CPUs will still be suboptimal.
      
      Bug #9730
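The per-layer sizing change can be sketched as follows; parameter names are illustrative, and the clamp to numCtx assumes a layer never needs more than the full context:

```go
package main

// layerCacheSize models the allocation change: sliding-window layers
// only need roughly windowSize+batchSize resident positions, while
// full-attention layers still need the entire context. On models like
// Gemma3, where the two layer kinds are mixed, this is what drives the
// large memory savings at big context sizes.
func layerCacheSize(isSliding bool, numCtx, windowSize, batchSize int) int {
	if isSliding {
		n := windowSize + batchSize
		if n > numCtx {
			n = numCtx
		}
		return n
	}
	return numCtx
}
```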
    • kvcache: Pass granular cache size into implementations · 3ed7ad3a
      Jesse Gross authored
      Currently the runner computes the kv size needed and creates a
      cache of that size. This is the context size times number of
      parallel sequences.
      
      Cache implementations can make better decisions about their memory
      usage, so instead pass in the required capacity, number of sequences
      and maximum batch size. For now, the causal cache just uses this to
      compute the size in the same way as before.
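A hedged sketch of the new division of labor: the runner passes granular figures, and the causal cache reproduces the old computation from them (names are illustrative, not the kvcache signatures):

```go
package main

// causalCacheSize shows the "same as before" computation now done
// inside the cache implementation: total slots = per-sequence capacity
// times the number of parallel sequences. maxBatch is passed so other
// implementations can size themselves differently; the causal cache
// here simply ignores it.
func causalCacheSize(capacity, numSeqs, maxBatch int) int {
	_ = maxBatch // available to implementations that need it
	return capacity * numSeqs
}
```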
  14. 20 Mar, 2025 1 commit
  15. 11 Mar, 2025 4 commits
  16. 10 Mar, 2025 1 commit
    • model: Update encoder cache to use multimodal input processing handler · a1cda80b
      Jesse Gross authored
      The encoder cache needs to know the position of images in the input
      stream so that it knows when to delete them. Previously images didn't
      have a position, so we implied one by breaking batches before an
      image and then assuming the image was in the first position. However,
      multimodal objects are now given explicit positions in the input
      stream, so we can use that instead.
      
      Breaking batches was also a way to simulate a cross attention mask
      for mllama. However, given that it only supports a single sequence
      and a single image, this mask doesn't serve any real purpose.
      Removing the batch break does not appear to affect the quality of
      the output.
      
      Most of this is simply moving the input data structures to a new
      package to avoid import cycles.
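Deletion by explicit position can be sketched as below; the map-based structure is hypothetical, not the actual encoder cache:

```go
package main

// pruneEncoderCache sketches deletion keyed on explicit positions: once
// processing has advanced past an image's position in the input stream,
// its encoder output can be dropped. Previously the position had to be
// inferred by breaking batches before each image.
func pruneEncoderCache(imagePos map[int]int, processedThrough int) map[int]int {
	kept := make(map[int]int)
	for id, pos := range imagePos {
		if pos > processedThrough {
			kept[id] = pos
		}
	}
	return kept
}
```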
  17. 07 Mar, 2025 1 commit
  18. 02 Mar, 2025 1 commit
  19. 27 Feb, 2025 2 commits
  20. 14 Feb, 2025 2 commits
    • Daniel Hiltgen authored · df2680b4
    • Runner for Ollama engine · ed443a03
      Jesse Gross authored
      This provides integration with the new Ollama engine
      (58245413 next ollama runner (#7913)) and the rest of the Ollama
      infrastructure such as the runner and Ollama server.
      
      In addition, it also builds out the KV cache infrastructure to
      support requirements of how Ollama runs models such as:
       - Parallel processing
       - Memory management for defragmentation and shifting
       - Multi-modal models
      
      Both old and new engines continue to be supported. By default, only
      the old engine is used. To enable the new engine:
      
      Start the server with the OLLAMA_NEW_ENGINE environment variable set:
      OLLAMA_NEW_ENGINE=1 ./ollama serve
      
      Start a model that is supported by the Ollama engine. This one is Llama 3.1 8b Q4_K_M:
      ./ollama run jessegross/llama3.1