1. 17 Oct, 2025 1 commit
    • test: harden scheduler tests (#12662) · 68e04c7f
      Daniel Hiltgen authored
      * test: harden scheduler tests
      
      This removes reschedDelay, which was stale code, and adds
      a configurable timeout for waitForVRAMRecovery so tests can
      set the timeout very short and avoid the scheduler getting
      stuck long enough to hit a test timeout.
      
      * test: tune tests for partial loads
      
      Give stress tests more time when the model is split between CPU/GPU
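      As an illustration of the pattern described above, here is a minimal sketch with hypothetical field and function names (the real ones in sched.go may differ): the scheduler carries a configurable timeout, which production leaves at its default and tests shrink to milliseconds.

      ```go
      package sched

      import (
          "context"
          "time"
      )

      // vramRecovered stands in for the real free-VRAM probe.
      func vramRecovered() bool { return true }

      // Scheduler (sketch): tests set vramRecoveryTimeout to a few milliseconds
      // so a stuck recovery loop fails fast instead of hitting the test timeout.
      type Scheduler struct {
          vramRecoveryTimeout time.Duration
      }

      func (s *Scheduler) waitForVRAMRecovery(ctx context.Context) bool {
          timeout := s.vramRecoveryTimeout
          if timeout == 0 {
              timeout = 5 * time.Second // production default
          }
          deadline, cancel := context.WithTimeout(ctx, timeout)
          defer cancel()
          ticker := time.NewTicker(50 * time.Millisecond)
          defer ticker.Stop()
          for {
              select {
              case <-deadline.Done():
                  return false // give up; VRAM did not recover in time
              case <-ticker.C:
                  if vramRecovered() {
                      return true
                  }
              }
          }
      }
      ```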
  2. 10 Oct, 2025 1 commit
  3. 09 Oct, 2025 1 commit
    • logs: quiet down context canceled on completion and scheduler noise (#12553) · 15e3611d
      Daniel Hiltgen authored
      * logs: quiet down context canceled on completion
      
      If the client closes the connection before Completion finishes, we were
      logging at error level, implying the runner had crashed, which was misleading.
      
      time=2025-10-08T22:59:20.566-07:00 level=ERROR source=server.go:1490 msg="post predict" error="Post \"http://127.0.0.1:57736/completion\": context canceled"
      
      * quiet down scheduler log error on expected case
      
      Since we don't hold the lock while performing memory load calculations, other
      runners can unload in parallel, so finding no runner to unload is a valid scenario
      which we shouldn't log at error level.
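      A minimal sketch of the first change, assuming a hypothetical helper name: the point is simply that context.Canceled from a client hang-up is expected and shouldn't be logged as a runner failure.

      ```go
      package server

      import (
          "context"
          "errors"
          "log/slog"
      )

      // logCompletionErr (hypothetical) downgrades the expected client-cancel
      // case to debug while keeping real failures at error level.
      func logCompletionErr(err error) {
          switch {
          case err == nil:
              return
          case errors.Is(err, context.Canceled):
              slog.Debug("completion canceled by client", "error", err)
          default:
              slog.Error("post predict", "error", err)
          }
      }
      ```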
  4. 01 Oct, 2025 1 commit
    • Use runners for GPU discovery (#12090) · bc8909fb
      Daniel Hiltgen authored
      This revamps how we discover GPUs in the system by leveraging the Ollama
      runner.  This should eliminate inconsistency between our GPU discovery and the
      runners' capabilities at runtime, particularly for cases where we try to filter
      out unsupported GPUs.  Now the runner does that implicitly based on the actual
      device list.  In some cases free VRAM reporting can be unreliable, which can
      lead to scheduling mistakes, so this also includes a patch to leverage more
      reliable VRAM reporting libraries if available.
      
      Automatic workarounds have been removed, as only one GPU relied on them; that
      workaround is now documented.  This GPU will soon fall off the support matrix
      with the next ROCm bump.
      
      Additional cleanup of the scheduler and discovery packages can be done in the
      future once we have switched on the new memory management code, and removed
      support for the llama runner.
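      A rough sketch of the shape of the change, with entirely hypothetical type and method names: the scheduler asks the runner for its device list instead of re-implementing the supported-GPU filtering itself.

      ```go
      package discover

      // DeviceInfo and Runner are hypothetical stand-ins for what the runner
      // reports; the scheduler treats the runner's own enumeration as the
      // source of truth, so unsupported GPUs never appear in the first place.
      type DeviceInfo struct {
          ID       string
          Library  string // e.g. "cuda", "rocm"
          FreeVRAM uint64 // bytes, ideally from a more reliable vendor library
      }

      type Runner interface {
          Devices() ([]DeviceInfo, error) // GPUs the runner can actually use
      }

      // usableGPUs returns whatever the runner itself enumerates.
      func usableGPUs(r Runner) ([]DeviceInfo, error) {
          return r.Devices()
      }
      ```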
  5. 17 Sep, 2025 1 commit
  6. 14 Aug, 2025 1 commit
    • llm: New memory management · d5a0d8d9
      Jesse Gross authored
      This changes the memory allocation strategy from upfront estimation to
      tracking actual allocations done by the engine and reacting to that. The
      goal is to avoid issues caused by both under-estimation (crashing) and
      over-estimation (low performance due to under-utilized GPUs).
      
      It is currently opt-in and can be enabled for models running on the
      Ollama engine by setting OLLAMA_NEW_ESTIMATES=1. Behavior in other
      cases is unchanged and will continue to use the existing estimates.
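      The opt-in can be illustrated with a tiny sketch; OLLAMA_NEW_ESTIMATES comes from the message above, while the helper name is made up.

      ```go
      package llm

      import "os"

      // newEstimatesEnabled (illustrative name) gates the allocation-tracking
      // memory management behind the opt-in environment variable; everything
      // else keeps using the existing upfront estimates.
      func newEstimatesEnabled() bool {
          return os.Getenv("OLLAMA_NEW_ESTIMATES") == "1"
      }
      ```

      Typical usage would be starting the server with the variable set, e.g. `OLLAMA_NEW_ESTIMATES=1 ollama serve`.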
  7. 11 Aug, 2025 1 commit
    • sched: Add support for grouping GPUs (#10678) · ea7657b5
      Daniel Andersen authored
      This patch modifies Ollama to allow grouping GPUs to memory-fit the requested model, instead of the former algorithm of using one GPU or distributing the model over all available GPUs.
      
      Benefits:
       - Less (PCIe) bus communication between GPUs, especially when the links are not very fast.
       - Unallocated GPUs can drop into power-saving mode.
       - Significantly reduced VRAM allocation when using more than 2 GPUs in a system.
       - Because of the reduced memory allocation, you can run more models simultaneously.
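      A minimal sketch of the grouping idea: names and the greedy strategy are illustrative, not the exact algorithm in the patch.

      ```go
      package sched

      import "sort"

      // gpu is a simplified stand-in for the scheduler's GPU descriptor.
      type gpu struct {
          ID       string
          FreeVRAM uint64
      }

      // pickGroup sorts GPUs by free VRAM and takes the smallest prefix whose
      // combined free memory fits the model, leaving the remaining GPUs idle
      // (and free to power down) instead of spreading the model across every device.
      func pickGroup(gpus []gpu, required uint64) []gpu {
          sorted := append([]gpu(nil), gpus...)
          sort.Slice(sorted, func(i, j int) bool {
              return sorted[i].FreeVRAM > sorted[j].FreeVRAM // largest first
          })
          var total uint64
          for n, g := range sorted {
              total += g.FreeVRAM
              if total >= required {
                  return sorted[:n+1]
              }
          }
          return nil // no grouping fits; caller falls back (e.g. partial offload)
      }
      ```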
  8. 08 Jul, 2025 1 commit
    • Reduce default parallelism to 1 (#11330) · 20c3266e
      Daniel Hiltgen authored
      The current scheduler algorithm of picking the parallelism based on available
      VRAM complicates the upcoming dynamic layer memory allocation algorithm.  This
      changes the default to 1, with the intent going forward that parallelism is
      explicit and will no longer be dynamically determined.  Removal of the dynamic
      logic will come in a follow-up.
  9. 22 May, 2025 1 commit
  10. 14 May, 2025 1 commit
  11. 07 May, 2025 1 commit
    • sched: fix race leading to orphaned runners (#10599) · 5e380c3b
      Daniel Hiltgen authored
      If a model is loading, and the request context is canceled during the load
      by a client closing the connection, and another request is inbound for the
      same model with a different configuration (context size, etc.) thus requiring
      a reload, two unload events can be in flight.  The first shut down the
      original model load, but the second discarded the reference to the new
      reloading runner, triggering the leak.
      
      The primary fix is detecting the duplicate unload and ignoring the second
      instance.  The load routine is also hardened to ensure we detect
      clobbering an already present runner and unload it with a warning.
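      A simplified sketch of the two hardening steps described above, with made-up type and field names:

      ```go
      package sched

      import "log/slog"

      // runnerRef and loader stand in for the scheduler's loaded-runner bookkeeping.
      type runnerRef struct {
          unloaded bool
      }

      type loader struct {
          loaded map[string]*runnerRef
      }

      // unload ignores a second unload event for the same runner so it cannot
      // discard the reference to a runner that is concurrently being reloaded.
      func (l *loader) unload(model string, r *runnerRef) {
          if r == nil || r.unloaded {
              slog.Debug("ignoring duplicate unload", "model", model)
              return
          }
          r.unloaded = true
          delete(l.loaded, model)
          // ... stop the subprocess and release VRAM ...
      }

      // load detects clobbering: if a runner is already present for the model,
      // warn and unload it rather than silently leaking it.
      func (l *loader) load(model string, fresh *runnerRef) {
          if prev, ok := l.loaded[model]; ok && prev != nil && !prev.unloaded {
              slog.Warn("clobbering existing runner", "model", model)
              l.unload(model, prev)
          }
          l.loaded[model] = fresh
      }
      ```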
  12. 05 May, 2025 1 commit
  13. 03 May, 2025 1 commit
    • sched: logging improvements (#10550) · 76ea735a
      Daniel Hiltgen authored
      This enhances our logging in the scheduler.  The initial "waiting for server" log
      no longer claims an error state (it now reads "not responding", which better
      reflects the actual state).  Runners now have slog wiring to report more details,
      including the PID.
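      The "slog wiring" could look roughly like the following sketch (names are illustrative): deriving a per-runner logger means every later scheduler message about that runner carries its PID automatically.

      ```go
      package sched

      import (
          "log/slog"
          "os/exec"
      )

      // runnerLogger derives a logger carrying the runner's details. It assumes
      // cmd has already been started, so cmd.Process is non-nil.
      func runnerLogger(base *slog.Logger, model string, cmd *exec.Cmd) *slog.Logger {
          return base.With("model", model, "pid", cmd.Process.Pid)
      }
      ```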
  14. 30 Apr, 2025 1 commit
    • Fix "Stopping..." scheduler hang (#10487) · 415c8fcc
      Daniel Hiltgen authored
      * Adjust initial scheduler refCount
      
      Ensure we only set the refCount on success
      
      * sched: fix lock order inversion deadlock
      
      Under certain race conditions, the scheduler could deadlock while updating
      free space information as a model was trying to unload.
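      The lock-order-inversion class of bug can be illustrated with a generic sketch (the mutex names here are invented, not the scheduler's actual fields): two goroutines that take the same pair of locks in opposite orders can each wait on the other forever; the fix is a single agreed acquisition order.

      ```go
      package sched

      import "sync"

      type scheduler struct {
          schedMu  sync.Mutex // scheduler-wide state (queues, free-space accounting)
          runnerMu sync.Mutex // per-runner state
      }

      // Both paths acquire schedMu before runnerMu. If one path took the locks
      // in the opposite order, the two could deadlock against each other.
      func (s *scheduler) updateFreeSpace() {
          s.schedMu.Lock()
          defer s.schedMu.Unlock()
          s.runnerMu.Lock()
          defer s.runnerMu.Unlock()
          // ... recompute free VRAM ...
      }

      func (s *scheduler) unloadRunner() {
          s.schedMu.Lock()
          defer s.schedMu.Unlock()
          s.runnerMu.Lock()
          defer s.runnerMu.Unlock()
          // ... tear down the runner ...
      }
      ```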
  15. 29 Apr, 2025 1 commit
    • lower default num parallel to 2 · fe5b9bb2
      Devon Rifkin authored
      this is in part to "pay" for #10452, which doubled the default context length. The combination isn't fully neutral, though: even though the old 4x2k limit and the new 2x4k limit are memory-equivalent, the 1x fallback is larger with 4k.
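      In concrete terms, using the numbers above: the old default budgeted 4 × 2048 = 8192 tokens of context across parallel slots and the new default budgets 2 × 4096 = 8192, so the common case is memory-equivalent; only the single-slot fallback grows, from 1 × 2048 to 1 × 4096.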
  16. 28 Apr, 2025 1 commit
  17. 27 Apr, 2025 1 commit
  18. 22 Apr, 2025 1 commit
    • increase default context length to 4096 (#10364) · 424f6486
      Devon Rifkin authored
      * increase default context length to 4096
      
      We lower the default numParallel from 4 to 2 and use these "savings" to
      double the default context length from 2048 to 4096.
      
      We're memory neutral in cases where we previously would've used
      numParallel == 4, but we add the following mitigation to handle some
      cases where we would have previously fallen back to 1x2048 due to low
      VRAM: we decide between 2048 and 4096 using a runtime check, choosing
      2048 if we're on a one-GPU system with total VRAM of <= 4 GB.  We
      purposefully don't check the available VRAM because we don't want the
      context window size to change unexpectedly based on the available VRAM.
      
      We plan on making the default even larger, but this is a relatively
      low-risk change we can make to quickly double it.
      
      * fix tests
      
      add an explicit context length so they don't get truncated. The code
      that treats -1 as the signal for doing a runtime check isn't
      running as part of these tests.
      
      * tweak small gpu message
      
      * clarify context length default
      
      also make it actually show up in `ollama serve --help`
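      The runtime check described above can be sketched as follows (function and parameter names are illustrative, and whether "4 GB" is decimal or binary in the real code is not specified here):

      ```go
      package envconfig

      // defaultContextLength: the default is 4096, except on a single-GPU
      // system whose *total* VRAM is at most ~4 GB, where it stays at 2048.
      // Available VRAM is deliberately not consulted, so the default cannot
      // fluctuate from request to request.
      func defaultContextLength(gpuCount int, totalVRAMBytes uint64) int {
          const smallVRAM = 4 * 1024 * 1024 * 1024 // ~4 GB threshold
          if gpuCount == 1 && totalVRAMBytes <= smallVRAM {
              return 2048
          }
          return 4096
      }
      ```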
  19. 09 Apr, 2025 1 commit
  20. 02 Apr, 2025 1 commit
  21. 01 Apr, 2025 1 commit
  22. 26 Mar, 2025 1 commit
    • ggml: Support heterogeneous KV cache layer sizes in memory estimation · f66216e3
      Jesse Gross authored
      Gemma3 uses sliding windows for its context on 5 out of every 6 layers, significantly
      reducing memory usage but leading to uneven usage across layers,
      which makes allocation to the correct GPU difficult. We currently
      estimate very conservatively by assuming all layers are consistent
      at the max size.
      
      Llama3.2-vision is also inconsistent between self attention and cross
      attention layers - at the moment, we calculate the correct total size
      and then average it across layers. In some cases, this may lead
      to crashes if a large layer is placed on a GPU sized by the average.
      
      This allows memory estimation to calculate per-layer KV cache size
      and take this into account when placing layers onto GPUs. We already do
      this for weights that vary per-tensor, so this is a logical extension.
      
      Fixes #9730
      Fixes #9890
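      A condensed sketch of per-layer placement (all names illustrative): instead of charging every layer the maximum size, sum each layer's actual KV cost while filling GPUs in order.

      ```go
      package llm

      // layerKVSize stands in for whatever computes a single layer's KV cache
      // footprint (sliding-window or cross-attention layers can be much smaller
      // than full-attention layers).
      type layerKVSize func(layer int) uint64

      // placeLayers assigns layers to GPUs using each layer's real KV size,
      // decrementing the per-GPU free-VRAM budget as it goes.
      func placeLayers(numLayers int, kv layerKVSize, gpuFree []uint64) []int {
          placement := make([]int, numLayers) // layer -> GPU index, -1 = CPU
          gpu := 0
          for layer := 0; layer < numLayers; layer++ {
              size := kv(layer)
              for gpu < len(gpuFree) && gpuFree[gpu] < size {
                  gpu++ // this GPU is full; move on to the next one
              }
              if gpu == len(gpuFree) {
                  placement[layer] = -1 // no GPU space left; fall back to CPU
                  continue
              }
              gpuFree[gpu] -= size
              placement[layer] = gpu
          }
          return placement
      }
      ```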
  23. 20 Feb, 2025 1 commit
  24. 14 Feb, 2025 1 commit
    • next ollama runner (#7913) · 58245413
      Michael Yang authored
      feat: add new Ollama engine using ggml through cgo
      
      This change introduces a new way to run pretrained models. It introduces 3 high-level interfaces and a bunch of smaller helper interfaces to facilitate this.
      
      - `model.Model` defines the interface for a model architecture. Models such as `llama` and `mllama`, which are provided as examples, can implement the model's forward propagation in the `Forward` method. This method will be called to generate completions. This interface can be found in `model/model.go`
      - `ml.Backend` defines the interface for a backend tensor library, in this case `ggml`. Among other things, a Backend is responsible for loading a pretrained model into hardware (GPU, CPU, etc) and providing an interface for Models to access loaded tensors. This interface can be found in `ml/backend.go`
      - `ml.Tensor` defines the interface for a tensor and tensor operations
      
      This is the first implementation of the new engine. Follow up PRs will implement more features:
      
      - non-greedy sampling (#8410)
      - integration with Ollama and KV caching (#8301)
      - more model support (#9080) with more coming soon
      Co-authored-by: Bruce MacDonald <brucewmacdonald@gmail.com>
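      A heavily condensed sketch of the three interfaces, to give their shape only; the real definitions in model/model.go and ml/backend.go have more methods and different signatures.

      ```go
      package ml

      // Tensor is the abstraction over tensor data and operations.
      type Tensor interface {
          Shape() []int
          // ... arithmetic and matmul methods elided ...
      }

      // Backend loads a pretrained model onto hardware (GPU, CPU, ...) and
      // hands out its tensors by name.
      type Backend interface {
          Get(name string) Tensor
      }

      // Context and Batch are placeholders for the engine's real types.
      type Context interface{}
      type Batch struct{}

      // Model is sketched here in the same package only to stay self-contained;
      // in the tree it is model.Model in model/model.go, with Forward running
      // the architecture's forward pass to produce output logits.
      type Model interface {
          Forward(ctx Context, batch Batch) (Tensor, error)
      }
      ```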
  25. 10 Dec, 2024 1 commit
  26. 06 Nov, 2024 1 commit
    • sched: Lift parallel restriction for multimodal models except mllama · 6cd56687
      Jesse Gross authored
      The Go runner does not have a problem with supporting parallel
      requests for most multimodal models. Now that we won't be potentially
      falling back to server.cpp, this restriction can be lifted.
      
      However, the new mllama model can't support parallel requests, so we
      will need to keep a restriction for that.
  27. 17 Oct, 2024 1 commit
  28. 11 Sep, 2024 1 commit
  29. 22 Aug, 2024 1 commit
    • Fix embeddings memory corruption (#6467) · 90ca8417
      Daniel Hiltgen authored
      * Fix embeddings memory corruption
      
      The patch was leading to a buffer overrun corruption.  Once removed, though, parallelism
      in server.cpp led to hitting an assert due to slot/seq IDs being >= token count.  To
      work around this, only use slot 0 for embeddings.
      
      * Fix embed integration test assumption
      
      The token eval count has changed with recent llama.cpp bumps (0.3.5+)
  30. 18 Aug, 2024 2 commits
  31. 17 Aug, 2024 1 commit
  32. 13 Aug, 2024 1 commit
    • lint · 2697d7f5
      Michael Yang authored
      - fixes printf: non-constant format string in call to fmt.Printf
      - fixes SA1032: arguments have the wrong order
      - disables testifylint
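      For illustration, the two fix patterns look roughly like this (the exact call sites in the commit differ; SA1032 is staticcheck's check for swapped arguments, e.g. to errors.Is):

      ```go
      package main

      import (
          "errors"
          "fmt"
      )

      var errNotFound = errors.New("not found")

      func main() {
          msg := "100% done"

          // printf: pass dynamic strings as arguments, not as the format string,
          // so stray '%' characters can't be misinterpreted as verbs.
          fmt.Printf("%s\n", msg) // rather than fmt.Printf(msg)

          // SA1032-style fix: errors.Is(err, target) — the wrapped error first,
          // the sentinel second.
          err := fmt.Errorf("lookup failed: %w", errNotFound)
          fmt.Println(errors.Is(err, errNotFound)) // not errors.Is(errNotFound, err)
      }
      ```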
  33. 02 Aug, 2024 1 commit
  34. 30 Jul, 2024 1 commit
    • Prevent partial loading on mixed GPU brands · 34542099
      Daniel Hiltgen authored
      In multi-brand GPU setups, if we couldn't fully load the model we
      would fall through the scheduler and mistakenly try to load across
      a mix of brands.  This makes sure we find the set of GPU(s) that
      best fits the partial load.
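      A rough sketch of the brand-constrained selection (type names and the "most free VRAM" heuristic are illustrative):

      ```go
      package sched

      // gpuInfo is a simplified stand-in for the discovery package's GPU record.
      type gpuInfo struct {
          Library  string // "cuda", "rocm", ...
          FreeVRAM uint64
      }

      // bestBrandSubset groups GPUs by library (brand) and returns the single
      // group offering the most free VRAM for the partial load, so a runner is
      // never spread across a mix of brands.
      func bestBrandSubset(gpus []gpuInfo) []gpuInfo {
          byLib := map[string][]gpuInfo{}
          for _, g := range gpus {
              byLib[g.Library] = append(byLib[g.Library], g)
          }
          var best []gpuInfo
          var bestFree uint64
          for _, group := range byLib {
              var free uint64
              for _, g := range group {
                  free += g.FreeVRAM
              }
              if free > bestFree {
                  bestFree, best = free, group
              }
          }
          return best
      }
      ```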
  35. 22 Jul, 2024 4 commits
  36. 11 Jul, 2024 1 commit