  1. 22 May, 2025 2 commits
    • ml: Panic rather than return error on tensor allocation failure · 1f371ea9
      Jesse Gross authored
      FromFloatSlice and FromIntSlice return an error if the shape doesn't
      match the passed data or if memory can't be allocated. Since these
      are inputs, the memory being allocated is system memory rather than VRAM.
      
      In many cases, the caller can't really handle the error and panics.
      
      Empty and Zeros directly panic if they can't allocate memory.
      
      This makes things consistent by panicking for the first two cases,
      removing a fair amount of error handling code. This is also consistent
      with how Go typically handles these situations.
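
      A rough sketch of the fail-fast pattern this commit describes, in Go; the
      function below is a hypothetical stand-in, not the actual ml package API:

          package main

          import "fmt"

          // fromFloatSlice mimics the behavior described above: rather than
          // returning an error, it panics when the shape does not match the
          // supplied data. (Hypothetical stand-in for a FromFloatSlice-style call.)
          func fromFloatSlice(s []float32, shape ...int) []float32 {
              n := 1
              for _, d := range shape {
                  n *= d
              }
              if n != len(s) {
                  panic(fmt.Errorf("invalid shape %v for %d elements", shape, len(s)))
              }
              // Real code would copy into backend-owned system memory here.
              return s
          }

          func main() {
              fmt.Println(fromFloatSlice([]float32{1, 2, 3, 4, 5, 6}, 2, 3))
          }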
    • ollamarunner: Memory usage reporting · 73d6a82c
      Jesse Gross authored
      This provides granular information about the backend memory allocations
      required by the runner:
       - Per backend
       - Per layer
       - Weights, cache and graph
       - Allocation status
      
      This can be used for debugging and validating memory estimates.
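
      The report might be organized along these lines; the types below are
      illustrative only and are not the runner's actual data structures:

          package main

          import "fmt"

          // layerMemory mirrors the granularity listed above: weights, cache and
          // graph requirements per layer, plus whether the allocation succeeded.
          type layerMemory struct {
              Weights, Cache, Graph uint64 // bytes
              Allocated             bool
          }

          // backendMemory groups the per-layer numbers under a backend (e.g. a GPU).
          type backendMemory struct {
              Name   string
              Layers []layerMemory
          }

          func main() {
              report := []backendMemory{{
                  Name:   "CUDA0",
                  Layers: []layerMemory{{Weights: 512 << 20, Cache: 64 << 20, Graph: 32 << 20, Allocated: true}},
              }}
              for _, b := range report {
                  for i, l := range b.Layers {
                      fmt.Printf("%s layer %d: weights=%d cache=%d graph=%d ok=%v\n",
                          b.Name, i, l.Weights, l.Cache, l.Graph, l.Allocated)
                  }
              }
          }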
  2. 25 Apr, 2025 1 commit
  3. 18 Apr, 2025 1 commit
  4. 08 Apr, 2025 2 commits
    • kvcache: stub out test structs · d98bfe7e
      Michael Yang authored
    • ollamarunner: Preallocate worst case graph at startup · dbb149e6
      Jesse Gross authored
      Currently, the KV cache and graph are lazily allocated as needed.
      The cache is fully allocated on first use of the corresponding
      layer whereas the graph grows with the size of the context.
      
      This can be an issue if another application allocates more VRAM
      after we do our calculations - Ollama will crash in the middle of
      inference. If we instead allocate the maximum needed memory at
      startup of the runner, we will either succeed or fail at that point
      rather than at some surprising time in the future.
      
      Currently, this only generates a worst case batch for text, which
      means that vision models may get a partial allocation and continue
      to lazily allocate the rest.
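
      A minimal sketch of the fail-fast idea described above; the function and
      numbers are hypothetical, not the runner's actual reservation code:

          package main

          import (
              "errors"
              "fmt"
          )

          // reserveWorstCase models allocating the maximum needed memory at
          // startup so an out-of-memory condition surfaces immediately rather
          // than in the middle of inference.
          func reserveWorstCase(freeVRAM, cacheBytes, graphBytes uint64) error {
              if cacheBytes+graphBytes > freeVRAM {
                  return errors.New("insufficient memory for worst-case batch")
              }
              // A real implementation would run a forward pass over a dummy
              // maximum-size batch here to force every allocation.
              return nil
          }

          func main() {
              if err := reserveWorstCase(8<<30, 6<<30, 1<<30); err != nil {
                  fmt.Println("failing at startup instead of later:", err)
                  return
              }
              fmt.Println("worst-case allocation succeeded")
          }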
  5. 03 Apr, 2025 2 commits
  6. 02 Apr, 2025 1 commit
    • kvcache: Add check for values that fall out of sliding window cache · b4297006
      jmorganca authored
      The sliding window cache trims entries that are outside the window for
      the latest token. This works when we are extending the cache, such as
      when the conversation continues. However, if we have a partial overlap
      in conversation (including the BOS tokens), then we resume from a past
      point in the conversation and the needed tokens are no longer stored
      in memory. This verifies that the new window overlaps with the old one
      before reusing the cache.
      Co-authored-by: Jesse Gross <jesse@ollama.com>
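
      A simplified sketch of that overlap check; the function and its
      parameters are illustrative, not the actual kvcache code:

          package main

          import "fmt"

          // canReuse reports whether a resumed prompt can be served from the
          // sliding window cache: positions older than cachedLen-windowSize
          // have been trimmed, so the point where the new prompt diverges must
          // still be inside the retained window.
          func canReuse(cachedLen, overlapLen, windowSize int) bool {
              oldestRetained := cachedLen - windowSize
              if oldestRetained < 0 {
                  oldestRetained = 0
              }
              return overlapLen > oldestRetained
          }

          func main() {
              fmt.Println(canReuse(4096, 4000, 1024)) // true: overlap is still cached
              fmt.Println(canReuse(4096, 100, 1024))  // false: needed tokens were trimmed
          }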
  7. 27 Mar, 2025 1 commit
    • ml: Remove Output from Context interface · 01aa7887
      Jesse Gross authored
      Model implementations should use Input for all of their tensors
      supplied to the model. This includes tensors that relate to the
      outputs, which is confusing since there is also an Output function.
      
      Since Output is only used internally in GGML and not used by any
      model implementations, we can remove it from the interface to
      reduce confusion.
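
      Roughly, the change looks like the following; the method names come from
      the commit, but the signatures are illustrative rather than the exact ml
      package API:

          package main

          type Tensor interface{}

          // Before: models saw both Input and Output on the context interface.
          type contextBefore interface {
              Input() Tensor
              Output() Tensor
          }

          // After: models only use Input; Output stays internal to the GGML backend.
          type contextAfter interface {
              Input() Tensor
          }

          func main() {}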
  8. 21 Mar, 2025 2 commits
    • kvcache: Optimize sliding window attention · 2d6eac90
      Jesse Gross authored
      Currently sliding window attention allocates and uses the full
      context size and just masks out any tokens that are outside of the
      window. However, we really only need (roughly) the sliding window
      size.
      
      At large context sizes this improves two things:
       - Memory allocated - the full context size no longer needs to be allocated
         up front, so memory requirements drop substantially. On Gemma3:4b with a
         32k context window, total memory usage (including weights and non-sliding
         layers) drops from ~20GB to ~8GB.
       - Computation - ranges that are completely outside of the sliding
         window are now removed from the tensors that are returned from the
         cache rather than simply being masked out. This results in more
         efficient processing, scaling with the size of the context that
         has actually been used.
      
      Notably, this does not update the scheduler for any model to be aware of
      the smaller memory requirements. This is difficult for Gemma3 because
      the layers are heterogeneous between sliding and non-sliding attention.
      As a result, while actual memory consumption will be reduced, the
      scheduler will over-estimate the requirements of the model. This means
      that splitting between GPUs or GPUs and CPUs will still be suboptimal.
      
      Bug #9730
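
      A back-of-the-envelope sketch of the sizing change; the formula is
      illustrative, not the exact cache implementation:

          package main

          import "fmt"

          // cacheLen returns roughly how many cache cells a layer needs: a
          // sliding-window layer only has to retain the window plus the current
          // batch, while a non-sliding layer still needs the full context.
          func cacheLen(contextLen, windowSize, batchSize int) int {
              if windowSize <= 0 || windowSize >= contextLen {
                  return contextLen // non-sliding layer
              }
              if n := windowSize + batchSize; n < contextLen {
                  return n
              }
              return contextLen
          }

          func main() {
              // 32k context, 1k sliding window, 512-token batches.
              fmt.Println(cacheLen(32768, 1024, 512)) // 1536 cells instead of 32768
              fmt.Println(cacheLen(32768, 0, 512))    // 32768 for a non-sliding layer
          }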
    • kvcache: Pass granular cache size into implementations · 3ed7ad3a
      Jesse Gross authored
      Currently the runner computes the KV cache size needed and creates a
      cache of that size. This is the context size times the number of
      parallel sequences.
      
      Cache implementations can make better decisions about their memory
      usage, so instead pass in the required capacity, number of sequences
      and maximum batch size. For now, the causal cache just uses this to
      compute the size in the same way as before.
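
      In outline, the new entry point gives implementations the raw inputs and
      lets them size themselves; the signature below is illustrative, not the
      actual kvcache API:

          package main

          import "fmt"

          // initCausal receives capacity, sequence count and batch size instead
          // of a single precomputed cache size. For now it reproduces the old
          // result: context size times the number of parallel sequences.
          func initCausal(contextLen, numSeqs, batchSize int) (cells int) {
              _ = batchSize // not needed by the causal cache yet
              return contextLen * numSeqs
          }

          func main() {
              fmt.Println(initCausal(4096, 4, 512)) // 16384, same as before
          }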
  9. 20 Mar, 2025 1 commit
  10. 11 Mar, 2025 4 commits
  11. 10 Mar, 2025 1 commit
    • model: Update encoder cache to use multimodal input processing handler · a1cda80b
      Jesse Gross authored
      The encoder cache needs to know the position of images in the input
      stream so that it knows when to delete them. Previously images didn't
      have a position, so we implied one by breaking batches before an
      image and then assuming the image was in the first position. However,
      multimodal objects are now given explicit positions in the input
      stream, so we can use that instead.
      
      Breaking batches was also a way to simulate a cross attention mask
      for mllama. However, given that it only supports a single sequence
      and a single image, this mask doesn't serve any real purpose.
      Removing the batch break does not appear to affect the quality of
      the output.
      
      Most of this is simply moving the input data structures to a new
      package to avoid import cycles.
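
      Conceptually, each element of the input stream now carries an explicit
      position, and a multimodal object occupies a position like any token; the
      struct below is illustrative, not the actual input package:

          package main

          import "fmt"

          // input is a single element of the stream; its position is simply its
          // index, so the encoder cache can tell when an image falls out of scope.
          type input struct {
              Token      int32
              Multimodal any // e.g. image embeddings; nil for plain tokens
          }

          func main() {
              seq := []input{
                  {Token: 1},                     // position 0: BOS
                  {Multimodal: "<image tensor>"}, // position 1: the image itself
                  {Token: 42},                    // position 2
              }
              for pos, in := range seq {
                  fmt.Println(pos, in.Token, in.Multimodal != nil)
              }
          }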
  12. 07 Mar, 2025 1 commit
  13. 02 Mar, 2025 1 commit
  14. 27 Feb, 2025 2 commits
  15. 14 Feb, 2025 2 commits
    • Daniel Hiltgen
      df2680b4
    • Runner for Ollama engine · ed443a03
      Jesse Gross authored
      This provides integration with the new Ollama engine
      (58245413 next ollama runner (#7913)) and the rest of the Ollama
      infrastructure such as the runner and Ollama server.
      
      In addition, it builds out the KV cache infrastructure to support the
      requirements of how Ollama runs models, such as:
       - Parallel processing
       - Memory management for defragmentation and shifting
       - Multi-modal models
      
      Both old and new engines continue to be supported. By default, only
      the old engine is used. To enable the new engine:
      
      Start the server with the OLLAMA_NEW_ENGINE environment variable set:
      OLLAMA_NEW_ENGINE=1 ./ollama serve
      
      Start a model that is supported by the Ollama engine. This one is Llama 3.1 8b Q4_K_M:
      ./ollama run jessegross/llama3.1