1. 31 Jul, 2025 1 commit
    • kvcache: Enable SWA to retain additional entries · 4183bb05
      Jesse Gross authored
      Models that use sliding window attention can only resume a sequence
      from the cache if it falls within the saved windows. This works well
      if the next message picks up where the old one left off. However, it
      generally prevents a partial prefix match unless the entire conversation
      falls within the sliding window.
      
      This can be a problem with reasoning models where the traces are
      supposed to be removed from future messages, forcing the entire
      history to be re-evaluated.
      
      This change allows models to specify that a larger amount of the
      history be retained in memory, allowing partial resumption in more
      cases. Token generation still respects the window that the model
      was trained on.
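      A minimal Go sketch of the idea, using hypothetical type and field
      names rather than Ollama's actual kvcache API: entries are evicted
      against a larger retention size, while attention visibility still
      uses the trained window.

```go
// Illustrative sketch only: hypothetical names, not Ollama's kvcache API.
// Entries are evicted against a larger retention size (retainSize) while
// attention visibility still uses the trained window (windowSize).
package main

import "fmt"

type swaCache struct {
	windowSize int   // window the model was trained on; bounds attention
	retainSize int   // entries kept in memory; may exceed windowSize
	positions  []int // token positions currently cached
}

// add stores a position and evicts anything older than retainSize,
// not windowSize, so a longer prefix stays available for resumption.
func (c *swaCache) add(pos int) {
	c.positions = append(c.positions, pos)
	for len(c.positions) > 0 && pos-c.positions[0] >= c.retainSize {
		c.positions = c.positions[1:]
	}
}

// visible reports whether a cached position can attend to the query
// position; generation still respects the trained window.
func (c *swaCache) visible(cached, query int) bool {
	return query-cached < c.windowSize
}

func main() {
	c := &swaCache{windowSize: 4, retainSize: 8}
	for pos := 0; pos < 10; pos++ {
		c.add(pos)
	}
	fmt.Println("retained:", c.positions)                   // 8 entries, more than the window
	fmt.Println("pos 3 visible to pos 9:", c.visible(3, 9)) // false: outside the window
}
```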
  2. 30 Jul, 2025 3 commits
  3. 29 Jul, 2025 3 commits
  4. 28 Jul, 2025 1 commit
  5. 27 Jul, 2025 1 commit
  6. 25 Jul, 2025 2 commits
    • kvcache: Group shift operations into batches · 764be748
      Jesse Gross authored
      Currently, when we need to shift the cache, we issue one RoPE
      operation over the entire cache (per layer). In some cases, this
      can create a compute graph that is larger than the forward pass,
      since the forward pass works in batches. Because we don't account
      for shifting in our memory estimates, this can cause a crash if we
      run out of memory.
      
      By limiting the size of the RoPE calls to batch size chunks, we
      ensure that the shift will never exceed the size of the forward
      pass, since the forward pass will also contain a RoPE of the same
      size. This does not have a significant impact on performance since
      RoPE is a math operation that is mostly proportional to the size
      of its inputs.
      
      In theory, defrag could have the same issue since it also creates a
      compute graph outside of the forward pass; however, since it
      consists only of copies, it does not require any working space.
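      A sketch of the batching idea with illustrative names, not
      Ollama's kvcache code: the shift is applied in batch-size chunks
      so no single RoPE call exceeds what the forward pass already
      builds.

```go
// Sketch of batched cache shifting; the RoPE operation is stubbed as a
// callback so the chunking logic is the only thing shown.
package main

import "fmt"

func shiftCache(positions []int, shift, batchSize int, rope func(chunk []int, shift int)) {
	for start := 0; start < len(positions); start += batchSize {
		end := min(start+batchSize, len(positions)) // Go 1.21+ builtin min
		rope(positions[start:end], shift)           // at most batchSize entries per call
	}
}

func main() {
	pos := []int{0, 1, 2, 3, 4, 5, 6}
	shiftCache(pos, -2, 3, func(chunk []int, s int) {
		for i := range chunk {
			chunk[i] += s
		}
		fmt.Println("shifted chunk:", chunk)
	})
}
```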
    • b72e5adb
      Ruyut authored
  7. 24 Jul, 2025 2 commits
  8. 23 Jul, 2025 2 commits
  9. 22 Jul, 2025 2 commits
  10. 20 Jul, 2025 2 commits
  11. 19 Jul, 2025 1 commit
  12. 17 Jul, 2025 5 commits
  13. 16 Jul, 2025 3 commits
  14. 11 Jul, 2025 4 commits
  15. 09 Jul, 2025 1 commit
    • ggml: Report ordinal IDs for AMD GPUs on Windows · 35fda7b4
      Jesse Gross authored
      We don't get valid UUIDs for AMD GPUs on Windows, so the best option
      is to use ordinal IDs. This brings us in line with what we currently
      do on the Ollama server - the only exception is AMD GPUs on Linux,
      which fall back to using ordinal IDs. The GGML implementation has no
      such fallback, but missing UUIDs don't appear to occur for any of
      the GPUs that we support.
      
      It's also possible that there are collisions between ordinal IDs
      from different libraries; however, the only places where we use them
      are AMD on Windows and Metal on Mac, which can never occur on the
      same system.
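      A sketch of the ID-selection rule with hypothetical types, not the
      actual GGML or Ollama code: prefer the device UUID when the driver
      reports a valid one, otherwise fall back to the ordinal index (the
      AMD-on-Windows case described above).

```go
// Hypothetical device-ID selection illustrating the fallback order.
package main

import "fmt"

type gpuDevice struct {
	ordinal int
	uuid    string // empty when no valid UUID is reported
}

func deviceID(g gpuDevice) string {
	if g.uuid != "" {
		return g.uuid
	}
	// Ordinal IDs are only used for AMD on Windows and Metal on Mac,
	// which never share a system, so they cannot collide.
	return fmt.Sprintf("%d", g.ordinal)
}

func main() {
	fmt.Println(deviceID(gpuDevice{ordinal: 0}))                     // "0": no UUID available
	fmt.Println(deviceID(gpuDevice{ordinal: 1, uuid: "GPU-abc123"})) // UUID wins when present
}
```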
  16. 08 Jul, 2025 3 commits
    • doc: add MacOS docs (#11334) · 66fb8575
      Daniel Hiltgen authored
      Also removes stale model directory instructions for Windows.
    • Reduce default parallelism to 1 (#11330) · 20c3266e
      Daniel Hiltgen authored
      The scheduler's current approach of picking parallelism based on
      available VRAM complicates the upcoming dynamic layer memory
      allocation algorithm. This changes the default to 1, with the
      intent that, going forward, parallelism is explicit and will no
      longer be dynamically determined. Removal of the dynamic logic
      will come in a follow-up.
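      A minimal sketch of the new default. OLLAMA_NUM_PARALLEL is a real
      Ollama setting; the surrounding function is illustrative, not the
      actual scheduler code.

```go
// Illustrative default-selection logic: explicit parallelism wins,
// otherwise a fixed default of 1 replaces the VRAM-derived value.
package main

import (
	"fmt"
	"os"
	"strconv"
)

func numParallel() int {
	if v := os.Getenv("OLLAMA_NUM_PARALLEL"); v != "" {
		if n, err := strconv.Atoi(v); err == nil && n > 0 {
			return n // explicit user choice
		}
	}
	return 1 // new default: no dynamic VRAM-based selection
}

func main() {
	fmt.Println("parallelism:", numParallel())
}
```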
    • API/CLI context enhancements (#11331) · 34088dbc
      Daniel Hiltgen authored
      * API: expose context size of loaded models
      
      * CLI: add context UX
      
      This adds a column to the ps output showing the model's context size.
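      A hedged sketch of consuming the exposed context size: it queries
      the local server's /api/ps endpoint (which is real) and prints each
      loaded model's context size; the JSON field name "context_length"
      is an assumption based on the commit description, not a verified
      part of the API.

```go
// Queries /api/ps on a locally running Ollama server and prints each
// loaded model's context size.
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

type psModel struct {
	Name          string `json:"name"`
	ContextLength int    `json:"context_length"` // assumed field name
}

type psResponse struct {
	Models []psModel `json:"models"`
}

func main() {
	resp, err := http.Get("http://localhost:11434/api/ps")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var ps psResponse
	if err := json.NewDecoder(resp.Body).Decode(&ps); err != nil {
		panic(err)
	}
	for _, m := range ps.Models {
		fmt.Printf("%s\tcontext=%d\n", m.Name, m.ContextLength)
	}
}
```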
  17. 07 Jul, 2025 4 commits