1. 03 Apr, 2025 2 commits
  2. 02 Apr, 2025 5 commits
  3. 01 Apr, 2025 4 commits
  4. 31 Mar, 2025 5 commits
    • runner: clear cache when shift is not possible (#9433) · 66b25392
      Bruce MacDonald authored
      Clear the KV cache when the shift operation is not supported by the model.
      Adds a KvCacheCanShift() check to handle models that can't perform cache shifts,
      falling back to a full cache clear while preserving the logical token history so
      that behavior stays as expected when the context window fills up.
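
      The fallback described above can be sketched roughly as follows. This is an
      illustrative Go sketch only: apart from the KvCacheCanShift() name taken from
      the commit message, the types, fields, and helpers are assumptions, not the
      actual runner code.

        // Illustrative sketch; only KvCacheCanShift() is named in the commit,
        // everything else here is an assumption.
        package main

        import "fmt"

        type sequence struct {
            tokens  []int // logical token history, kept even if the cache is cleared
            numPast int   // tokens currently represented in the KV cache
        }

        type model struct{ supportsShift bool }

        // KvCacheCanShift reports whether the model supports shifting its KV cache.
        func (m *model) KvCacheCanShift() bool { return m.supportsShift }

        // shiftOrClear makes room when the context window fills up.
        func shiftOrClear(m *model, seq *sequence, discard int) {
            if m.KvCacheCanShift() {
                // Shift: drop the oldest `discard` cache entries and keep the rest.
                seq.numPast -= discard
                fmt.Println("shifted cache, kept", seq.numPast, "entries")
                return
            }
            // Fallback: clear the KV cache entirely but keep the logical token
            // history so generation continues with the expected context.
            seq.numPast = 0
            fmt.Println("cleared cache, logical history still has", len(seq.tokens), "tokens")
        }

        func main() {
            seq := &sequence{tokens: make([]int, 2048), numPast: 2048}
            shiftOrClear(&model{supportsShift: false}, seq, 512)
        }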
    • server/internal/client/ollama: cache completed chunks (#9933) · ef27d52e
      Blake Mizerany authored
      This change adds tracking of download chunks during the pull process so
      that subsequent pulls can skip downloading already completed chunks.
      This works across restarts of ollama.
      
      Currently, download state will be lost if a prune is triggered during a
      pull (e.g. restart or remove). This issue should be addressed in a
      follow-up PR.
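
      A rough sketch of the idea, assuming a simple JSON manifest of completed chunk
      ranges persisted alongside the blob; the on-disk format, paths, and names below
      are assumptions, not what server/internal/client/ollama actually does.

        // Sketch of persisting completed download chunks so a restarted pull can
        // skip them; the manifest format and file layout are assumptions.
        package main

        import (
            "encoding/json"
            "fmt"
            "os"
            "path/filepath"
        )

        type chunk struct {
            Offset int64 `json:"offset"`
            Size   int64 `json:"size"`
        }

        type chunkState struct {
            Digest    string  `json:"digest"`    // blob being pulled
            Completed []chunk `json:"completed"` // chunks already downloaded
        }

        func statePath(dir, digest string) string {
            return filepath.Join(dir, digest+".chunks.json")
        }

        func load(dir, digest string) chunkState {
            st := chunkState{Digest: digest}
            if b, err := os.ReadFile(statePath(dir, digest)); err == nil {
                _ = json.Unmarshal(b, &st)
            }
            return st
        }

        func save(dir string, st chunkState) error {
            b, _ := json.Marshal(st)
            return os.WriteFile(statePath(dir, st.Digest), b, 0o644)
        }

        func (st chunkState) done(c chunk) bool {
            for _, d := range st.Completed {
                if d == c {
                    return true
                }
            }
            return false
        }

        func main() {
            dir := os.TempDir()
            st := load(dir, "sha256-example")
            for _, c := range []chunk{{0, 1 << 20}, {1 << 20, 1 << 20}} {
                if st.done(c) {
                    fmt.Println("skipping chunk at offset", c.Offset) // completed in an earlier pull
                    continue
                }
                // ... download the chunk here ...
                st.Completed = append(st.Completed, c)
                _ = save(dir, st) // persist progress so a restart of ollama can resume
            }
        }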
    • runner: Release semaphore and improve error messages on failures · b2a46529
      Jesse Gross authored
      If we hit an error after creating a new sequence but before finding
      a slot for it, we return without releasing the semaphore. This reduces
      the number of parallel sequences available and eventually leads to deadlock.
      
      In practice this should never happen because once we have acquired
      the semaphore, we should always be able to find a slot. However, the
      code is clearly not correct.
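
      The pattern being fixed looks roughly like the sketch below, which uses a
      buffered channel as a counting semaphore; the names and the error path are
      assumptions, not the runner's actual code.

        // Sketch of releasing the parallel-sequence semaphore on every early-return
        // error path; names and errors are assumptions.
        package main

        import (
            "errors"
            "fmt"
        )

        var errNoSlot = errors.New("no free slot")

        type server struct {
            seqSem chan struct{} // counting semaphore limiting parallel sequences
        }

        func (s *server) findSlot() error { return errNoSlot } // stand-in for slot lookup

        func (s *server) processSequence() error {
            s.seqSem <- struct{}{} // acquire
            released := false
            defer func() {
                if !released {
                    <-s.seqSem // release even if we fail before the sequence runs
                }
            }()

            if err := s.findSlot(); err != nil {
                // Without the deferred release above, returning here would leak a
                // semaphore slot and eventually deadlock the runner.
                return fmt.Errorf("failed to find a slot for the new sequence: %w", err)
            }

            released = true // the running sequence now owns the slot and releases it later
            return nil
        }

        func main() {
            s := &server{seqSem: make(chan struct{}, 4)}
            fmt.Println(s.processSequence())
            fmt.Println("slots still held:", len(s.seqSem)) // 0: nothing leaked
        }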
    • ollamarunner: Ensure batch size limits are not exceeded · 5d097277
      Jesse Gross authored
      With the llama runner, we can generate up to NUM_PARALLEL batches
      at once, which then get broken up into individual batches to be
      executed by llama.cpp (i.e. we add up to 2048 tokens and this gets
      split into 4 batches of 512 tokens at default settings).
      
      This splitting can improve parallelism on multi-GPU systems because
      the individual batches can move through the pipeline without blocking
      on the first one to fully complete. However, we don't yet support
      this in the Ollama runner, partially because it makes it hard to
      enforce model-specified batch constraints, which didn't exist
      previously.
      
      The result is that we will try to execute the full, unsplit batch.
      This could result in out of memory or insufficient KV cache space
      errors.
      
      This change triggers batch breaking when the total number of inputs
      from all sequences exceeds the batch size, rather than breaking per
      sequence. To ensure fairness, it also reintroduces round-robinning
      across sequences so that one busy sequence doesn't starve the others.
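
      A hedged sketch of the scheduling policy described above: the batch is capped
      by the total number of inputs across all sequences, and the starting sequence
      rotates between batches so a busy sequence can't starve the others. All names
      and structures here are illustrative assumptions.

        // Sketch of batch breaking on the total input count with round-robin over
        // sequences; names are assumptions.
        package main

        import "fmt"

        type seq struct {
            id      int
            pending []int // input tokens not yet scheduled
        }

        // nextBatch pulls up to batchSize tokens, visiting sequences round-robin
        // starting at *start, and advances *start so the next call resumes fairly.
        func nextBatch(seqs []*seq, batchSize int, start *int) map[int]int {
            contrib := map[int]int{} // tokens taken per sequence id
            total := 0
            n := len(seqs)
            for visited := 0; visited < n && total < batchSize; visited++ {
                s := seqs[(*start+visited)%n]
                take := batchSize - total
                if take > len(s.pending) {
                    take = len(s.pending)
                }
                if take > 0 {
                    s.pending = s.pending[take:]
                    contrib[s.id] += take
                    total += take
                }
            }
            *start = (*start + 1) % n // rotate the starting sequence for fairness
            return contrib
        }

        func main() {
            seqs := []*seq{
                {id: 0, pending: make([]int, 1500)}, // one busy sequence
                {id: 1, pending: make([]int, 100)},
                {id: 2, pending: make([]int, 100)},
            }
            start := 0
            for i := 0; i < 4; i++ {
                for id, count := range nextBatch(seqs, 512, &start) {
                    fmt.Printf("batch %d: seq %d contributes %d tokens\n", i, id, count)
                }
            }
        }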
  5. 28 Mar, 2025 1 commit
  6. 27 Mar, 2025 3 commits
  7. 26 Mar, 2025 5 commits
    • molbal · e5d84fb9
    • docs: add ollamb to community projects · dd66712e
      Hengky Steen authored
    • ggml: Support heterogeneous KV cache layer sizes in memory estimation · f66216e3
      Jesse Gross authored
      Gemma3 uses sliding windows for its context on 5/6 layers, significantly
      reducing memory usage but leading to uneven usage across layers,
      which makes allocation to the correct GPU difficult. We currently
      estimate very conservatively by assuming all layers are consistent
      at the max size.
      
      Llama3.2-vision is also inconsistent between self-attention and
      cross-attention layers - at the moment, we calculate the correct total
      size and then average it across layers. In some cases, this may lead
      to crashes if a large layer is placed on a GPU sized by the average.
      
      This change allows memory estimation to calculate per-layer KV cache
      sizes and take them into account when placing layers onto GPUs. We
      already do this for weights that vary per tensor, so this is a logical
      extension.
      
      Fixes #9730
      Fixes #9890
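
      The idea can be illustrated with a back-of-the-envelope calculation like the
      sketch below, comparing a per-layer estimate against the old assume-every-
      layer-is-max-size estimate for a Gemma3-style 5-of-6 sliding-window layout.
      The formula and the model dimensions are assumptions for illustration only.

        // Sketch of per-layer KV cache estimation for layers with different
        // effective context lengths; dimensions and formula are assumptions.
        package main

        import "fmt"

        type layer struct {
            window int // effective cache length for this layer, in tokens
        }

        // kvBytes estimates one layer's KV cache:
        // 2 (K and V) * tokens * kvHeads * headDim * bytes per element.
        func kvBytes(l layer, kvHeads, headDim, bytesPerElem int) int64 {
            return int64(2 * l.window * kvHeads * headDim * bytesPerElem)
        }

        func main() {
            const (
                contextLen = 32768
                windowLen  = 1024
                kvHeads    = 4
                headDim    = 256
                elemSize   = 2 // fp16
            )

            // Gemma3-style pattern: 5 sliding-window layers per full-attention layer.
            var layers []layer
            for i := 0; i < 48; i++ {
                if i%6 == 5 {
                    layers = append(layers, layer{window: contextLen})
                } else {
                    layers = append(layers, layer{window: windowLen})
                }
            }

            var perLayer, allAtMax int64
            for _, l := range layers {
                perLayer += kvBytes(l, kvHeads, headDim, elemSize)
                allAtMax += kvBytes(layer{window: contextLen}, kvHeads, headDim, elemSize)
            }
            fmt.Printf("per-layer estimate:         %d MiB\n", perLayer>>20)
            fmt.Printf("all-layers-at-max estimate: %d MiB\n", allAtMax>>20)
        }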
    • llm: Fix debug logging for memory estimates · f4f0992b
      Jesse Gross authored
    • kvcache: Sliding window cache only needs a single batch total · 1feff619
      Jesse Gross authored
      When computing the size of the cache for sliding window attention,
      we don't need to multiply the batch size by the number of parallel
      sequences - the batch size is constant.
      
      This also simplifies the check for whether to size the cache based on
      capacity or window size, as the batch size is already incorporated
      into the capacity when it is handled by the runner.
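
      As a rough sketch of the sizing rule (the names and exact formula here are
      assumptions, not the kvcache package's code): the sliding-window cache holds
      the window plus one batch of new tokens, with no per-sequence multiplier, and
      is capped by the capacity the runner computes.

        // Sketch only; variable names and formula are assumptions.
        package main

        import "fmt"

        func cacheSize(capacity, windowSize, batchSize int) int {
            // One batch of new tokens on top of the window; the batch size is
            // constant, so it is not multiplied by the number of parallel
            // sequences. Capacity already accounts for parallelism when the
            // runner computes it, so just take the smaller of the two.
            size := windowSize + batchSize
            if capacity < size {
                size = capacity
            }
            return size
        }

        func main() {
            fmt.Println(cacheSize(8192, 1024, 512)) // 1536: window + one batch
            fmt.Println(cacheSize(1024, 4096, 512)) // 1024: capped by capacity
        }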
  8. 25 Mar, 2025 1 commit
  9. 24 Mar, 2025 1 commit
  10. 21 Mar, 2025 12 commits
  11. 20 Mar, 2025 1 commit
    • model: Pass input tensor instead of raw data to models · 0fbfcf3c
      Jesse Gross authored
      Rather than directly giving the input data to models, we can
      pass a tensor instead. In the short term, this saves some duplicated
      code.
      
      Longer term, we will want to overlap setting up the next batch with
      processing of the current one. In this case, we will only have the
      shape of the tensor, but it will not be loaded with data at the time of
      graph generation. By passing only a tensor to models now, we set up
      this possibility and prevent them from relying on data that they won't
      have in the future.
      
      Although the same could be done for Positions and Outputs, in some
      cases we either need the raw input data or don't use them at all.
      Therefore, for now we leave them as they are and allow models to
      convert them to tensors as needed.
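
      A minimal sketch of the interface shape this change points toward, assuming a
      Tensor type that exposes only its shape at graph-construction time; the Tensor
      and Model definitions below are illustrative assumptions, not Ollama's actual
      model interface.

        // Sketch of passing a tensor handle to models instead of raw input data;
        // these interfaces are assumptions.
        package main

        import "fmt"

        // Tensor exposes only what graph construction needs: its shape. The data
        // may not be loaded yet, which is what allows the next batch to be set up
        // while the current one is still executing.
        type Tensor interface {
            Shape() []int
        }

        type inputTensor struct {
            shape []int
            data  []int32 // may be filled in after the graph is built
        }

        func (t *inputTensor) Shape() []int { return t.shape }

        // Model builds its graph from the input tensor plus raw positions and
        // output indices, which (as in the commit) stay as plain slices for now.
        type Model interface {
            Forward(inputs Tensor, positions []int32, outputs []int32) error
        }

        type toyModel struct{}

        func (toyModel) Forward(inputs Tensor, positions []int32, outputs []int32) error {
            // Only the shape is consulted; relying on inputs' data here would break
            // once batch setup overlaps with execution.
            fmt.Println("building graph for input of shape", inputs.Shape())
            return nil
        }

        func main() {
            var m Model = toyModel{}
            in := &inputTensor{shape: []int{1, 512}}
            _ = m.Forward(in, make([]int32, 512), []int32{511})
        }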