1. 03 Apr, 2025 1 commit
  2. 02 Apr, 2025 1 commit
    • kvcache: Add check for values that fall out of sliding window cache · b4297006
      jmorganca authored
      
      
      The sliding window cache trims entries that are outside the window for
      the latest token. This works when we are extending the cache, such as
      when the conversation continues. However, if we have a partial overlap
      in conversation (including the BOS tokens), then we resume from a past
      point in the conversation and the needed tokens are no longer stored
      in memory. This verifies that the new window overlaps with the old one
      before reusing the cache.
      Co-authored-by: Jesse Gross <jesse@ollama.com>
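The overlap check described in this commit can be sketched roughly as follows. The function and field names here are hypothetical illustrations, not the actual ollama kvcache API:

```go
package main

import "fmt"

// canReuseCache reports whether a cached sliding window still covers the
// point we want to resume from. cachedEnd is the position of the last token
// stored in the cache, windowSize is the sliding window length, and
// resumePos is where the new request diverges from the cached prefix.
// (Names are illustrative, not the real implementation.)
func canReuseCache(cachedEnd, windowSize, resumePos int) bool {
	// The cache only retains tokens in [cachedEnd-windowSize+1, cachedEnd].
	// Resuming earlier than that would need tokens already trimmed away.
	oldestCached := cachedEnd - windowSize + 1
	if oldestCached < 0 {
		oldestCached = 0
	}
	return resumePos >= oldestCached
}

func main() {
	fmt.Println(canReuseCache(100, 32, 90)) // resume point still inside the window
	fmt.Println(canReuseCache(100, 32, 10)) // resume point was trimmed from the cache
}
```

If the check fails, the safe fallback is to discard the cached prefix and recompute from the resume point.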
  3. 27 Mar, 2025 1 commit
    • ml: Remove Output from Context interface · 01aa7887
      Jesse Gross authored
      Model implementations should use Input for all of the tensors
      supplied to the model. This includes tensors that relate to the
      outputs, which is confusing since there is also an Output function.
      
      Since Output is only used internally in GGML and not used by any
      model implementations, we can remove it from the interface to
      reduce confusion.
  4. 21 Mar, 2025 2 commits
    • kvcache: Optimize sliding window attention · 2d6eac90
      Jesse Gross authored
      Currently sliding window attention allocates and uses the full
      context size and just masks out any tokens that are outside of the
      window. However, we really only need (roughly) the sliding window
      size.
      
      At large context sizes this improves two things:
       - Memory allocated - since the full context size was previously
         allocated up front, memory requirements drop substantially. On
         Gemma3:4b with a 32k context window, total memory usage (including
         weights and non-sliding layers) drops from ~20GB to ~8GB.
       - Computation - ranges that are completely outside of the sliding
         window are now removed from the tensors that are returned from the
         cache rather than simply being masked out. This results in more
         efficient processing, scaling with the size of the context that
         has actually been used.
      
      Notably, this does not update the scheduler for any model to be aware of
      the smaller memory requirements. This is difficult for Gemma3 because
      the layers are heterogeneous between sliding and non-sliding attention.
      As a result, while actual memory consumption will be reduced, the
      scheduler will over-estimate the requirements of the model. This means
      that splitting between GPUs or GPUs and CPUs will still be suboptimal.
      
      Bug #9730
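The memory saving can be illustrated with a rough slot count. The helper below is an invented sketch of the idea, not the actual kvcache code, and the numbers are illustrative rather than measured:

```go
package main

import "fmt"

// cacheSlots returns roughly how many KV slots a layer needs. Before this
// optimization every layer allocated the full context; afterwards a
// sliding-window layer only needs (roughly) the window plus the batch
// currently being processed. (Hypothetical helper for illustration.)
func cacheSlots(contextLen, windowSize, batchSize int, sliding bool) int {
	if !sliding || windowSize <= 0 || windowSize >= contextLen {
		// Non-sliding layers (or a window larger than the context)
		// still need the full context.
		return contextLen
	}
	n := windowSize + batchSize
	if n > contextLen {
		n = contextLen
	}
	return n
}

func main() {
	// 32k context, 1k sliding window, batch of 512.
	fmt.Println(cacheSlots(32768, 1024, 512, false)) // non-sliding layer: 32768 slots
	fmt.Println(cacheSlots(32768, 1024, 512, true))  // sliding layer: 1536 slots
}
```

This is also why the savings on Gemma3 are large but not total: the model mixes sliding and non-sliding layers, and only the sliding ones shrink.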
    • kvcache: Pass granular cache size into implementations · 3ed7ad3a
      Jesse Gross authored
      Currently the runner computes the kv size needed and creates a
      cache of that size. This is the context size times number of
      parallel sequences.
      
      Cache implementations can make better decisions about their memory
      usage, so instead pass in the required capacity, number of sequences
      and maximum batch size. For now, the causal cache just uses this to
      compute the size in the same way as before.
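The shape of this change can be sketched as an interface that receives the granular parameters and lets each implementation size itself. The interface, names, and return value below are illustrative assumptions, not the actual ollama kvcache interface:

```go
package main

import "fmt"

// Cache sketches the granular initialization described above: the runner
// hands the implementation the per-sequence capacity, the number of
// parallel sequences, and the maximum batch size, and lets the cache
// decide how much memory to allocate. (Illustrative, not the real API.)
type Cache interface {
	Init(capacity, numSeqs, maxBatch int) int // returns allocated slots
}

// causalCache keeps the previous behavior: total size is simply the
// capacity times the number of parallel sequences.
type causalCache struct{}

func (causalCache) Init(capacity, numSeqs, maxBatch int) int {
	return capacity * numSeqs
}

func main() {
	var c Cache = causalCache{}
	fmt.Println(c.Init(4096, 4, 512)) // 16384, same result as the old computation
}
```

A sliding-window implementation could then override Init to allocate closer to window-size, as the previous commit does.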
  5. 20 Mar, 2025 1 commit
  6. 11 Mar, 2025 4 commits
  7. 10 Mar, 2025 1 commit
    • model: Update encoder cache to use multimodal input processing handler · a1cda80b
      Jesse Gross authored
      The encoder cache needs to know the position of images in the input
      stream so that it knows when to delete them. Previously images didn't
      have a position, so we implied one by breaking batches before an
      image and then assuming the image was in the first position. However,
      multimodal objects are now given explicit positions in the input
      stream, so we can use that instead.
      
      Breaking batches was also a way to simulate a cross attention mask
      for mllama. However, given that it only supports a single sequence
      and a single image, this mask doesn't serve any real purpose.
      Removing the batch break does not appear to affect the quality of
      the output.
      
      Most of this is simply moving the input data structures to a new
      package to avoid import cycles.
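The position-based eviction described above can be sketched as follows. The struct, fields, and function here are hypothetical stand-ins for illustration, not the actual input package:

```go
package main

import "fmt"

// Input mirrors the idea of multimodal inputs carrying an explicit
// position in the token stream instead of one implied by batch breaks.
// (Struct and fields are illustrative, not the real input package.)
type Input struct {
	Pos   int
	Image []byte // nil for plain text tokens
}

// pruneEncoderCache drops cached image encodings whose position is before
// the earliest position the cache still needs to serve.
func pruneEncoderCache(cache map[int][]float32, earliestNeeded int) {
	for pos := range cache {
		if pos < earliestNeeded {
			delete(cache, pos)
		}
	}
}

func main() {
	inputs := []Input{{Pos: 3, Image: []byte{1}}, {Pos: 40, Image: []byte{2}}}
	cache := map[int][]float32{}
	for _, in := range inputs {
		if in.Image != nil {
			cache[in.Pos] = []float32{0.5} // stand-in for the encoder output
		}
	}
	pruneEncoderCache(cache, 10) // the image at position 3 is no longer needed
	fmt.Println(len(cache))
}
```

With explicit positions, eviction no longer depends on where batches happen to break.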
  8. 07 Mar, 2025 1 commit
  9. 02 Mar, 2025 1 commit
  10. 27 Feb, 2025 2 commits
  11. 14 Feb, 2025 2 commits
    • Daniel Hiltgen · df2680b4
    • Runner for Ollama engine · ed443a03
      Jesse Gross authored
      This provides integration with the new Ollama engine
      (58245413 next ollama runner (#7913)) and the rest of the Ollama
      infrastructure such as the runner and Ollama server.
      
      In addition, it also builds out the KV cache infrastructure to
      support requirements of how Ollama runs models such as:
       - Parallel processing
       - Memory management for defragmentation and shifting
       - Multi-modal models
      
      Both old and new engines continue to be supported. By default, only
      the old engine is used. To enable the new engine:
      
      Start the server with the OLLAMA_NEW_ENGINE environment variable set:
      OLLAMA_NEW_ENGINE=1 ./ollama serve
      
      Start a model that is supported by the Ollama engine. This one is Llama 3.1 8b Q4_K_M:
      ./ollama run jessegross/llama3.1