1. 31 May, 2025 1 commit
  2. 30 May, 2025 1 commit
  3. 29 May, 2025 3 commits
    • ggml: Export GPU UUIDs · aaa78180
      Jesse Gross authored
      This enables matching the devices reported by the backend with
      information from system management libraries such as nvml, in order
      to get accurate free memory reporting.
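
      A minimal Go sketch of the kind of matching this enables, using the
      go-nvml bindings to look up free memory for a GPU by the UUID the
      backend exported; the helper name and the example UUID are
      hypothetical, not Ollama's actual code:

      ```go
      package main

      import (
          "fmt"
          "log"

          "github.com/NVIDIA/go-nvml/pkg/nvml"
      )

      // freeMemoryByUUID returns the free VRAM, as reported by NVML, of the
      // GPU whose UUID was exported by the backend.
      func freeMemoryByUUID(uuid string) (uint64, error) {
          dev, ret := nvml.DeviceGetHandleByUUID(uuid)
          if ret != nvml.SUCCESS {
              return 0, fmt.Errorf("device %s not found: %s", uuid, nvml.ErrorString(ret))
          }
          mem, ret := dev.GetMemoryInfo()
          if ret != nvml.SUCCESS {
              return 0, fmt.Errorf("memory query failed: %s", nvml.ErrorString(ret))
          }
          return mem.Free, nil
      }

      func main() {
          if ret := nvml.Init(); ret != nvml.SUCCESS {
              log.Fatalf("nvml init failed: %s", nvml.ErrorString(ret))
          }
          defer nvml.Shutdown()

          // Hypothetical UUID, as the backend might report it.
          free, err := freeMemoryByUUID("GPU-00000000-0000-0000-0000-000000000000")
          if err != nil {
              log.Fatal(err)
          }
          fmt.Printf("free VRAM: %d bytes\n", free)
      }
      ```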
    • llm: Make "POST predict" error message more informative · f15ffc43
      Jesse Gross authored
      "POST predict" basically means that the runner has crashed, which
      can have many reasons. However, many people think this is a specific
      error and either report only this message or group together unrelated
      bugs. This replaces it with a more friendly and helpful message.
    • add thinking support to the api and cli (#10584) · 5f57b0ef
      Devon Rifkin authored
      - Both `/api/generate` and `/api/chat` now accept a `"think"`
        option that allows specifying whether thinking mode should be on or
        not (see the request sketch after this list)
      - Templates get passed this new option so, e.g., qwen3's template can
        put `/think` or `/no_think` in the system prompt depending on the
        value of the setting
      - Models' thinking support is inferred by inspecting model templates.
        The prefix and suffix the parser uses to identify thinking support are
        also automatically inferred from templates
      - Thinking control & parsing is opt-in via the API to prevent breaking
        existing API consumers. If the `"think"` option is not specified, the
        behavior is unchanged from previous versions of ollama
      - Add parsing for thinking blocks in both streaming/non-streaming mode
        in both `/generate` and `/chat`
      - Update the CLI to make use of these changes. Users can pass `--think`
        or `--think=false` to control thinking, or during an interactive
        session they can use the commands `/set think` or `/set nothink`
      - A `--hidethinking` option has also been added to the CLI. This makes
        it easy to use thinking in scripting scenarios like
        `ollama run qwen3 --think --hidethinking "my question here"` where you
        just want to see the answer but still want the benefits of thinking
        models
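
      A hedged sketch of a non-streaming `/api/generate` request using the
      new `"think"` option. The `model`, `prompt`, and `stream` fields and
      the default port are the standard Ollama API; `think` is the option
      added here. How the parsed thinking content is surfaced in the
      response is not spelled out in this message, so the sketch just
      prints the raw body:

      ```go
      package main

      import (
          "bytes"
          "encoding/json"
          "fmt"
          "io"
          "log"
          "net/http"
      )

      func main() {
          // Enable thinking for a single, non-streaming generation.
          body, err := json.Marshal(map[string]any{
              "model":  "qwen3",
              "prompt": "Why is the sky blue?",
              "think":  true,  // opt in to thinking mode
              "stream": false, // one JSON object instead of a stream
          })
          if err != nil {
              log.Fatal(err)
          }

          resp, err := http.Post("http://localhost:11434/api/generate",
              "application/json", bytes.NewReader(body))
          if err != nil {
              log.Fatal(err)
          }
          defer resp.Body.Close()

          out, _ := io.ReadAll(resp.Body)
          fmt.Println(string(out)) // raw response, including any thinking content
      }
      ```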
  4. 27 May, 2025 5 commits
  5. 26 May, 2025 1 commit
  6. 24 May, 2025 5 commits
  7. 23 May, 2025 2 commits
  8. 22 May, 2025 7 commits
    • ml: Panic rather than return error on tensor allocation failure · 1f371ea9
      Jesse Gross authored
      FromFloatSlice and FromIntSlice return an error if the shape doesn't
      match the passed data or if memory can't be allocated. Since these
      are inputs, the memory being allocated is system memory rather than VRAM.
      
      In many cases, the caller can't really handle the error and panics.
      
      Empty and Zeros directly panic if they can't allocate memory.
      
      This makes things consistent by panicking in the first two cases as
      well, removing a fair amount of error-handling code. This is also
      consistent with how Go typically handles these situations.
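
      A minimal sketch of the convention described, using hypothetical
      names rather than the real ml package types: a constructor for input
      tensors that panics on a shape mismatch instead of returning an
      error the caller cannot meaningfully handle:

      ```go
      package main

      import "fmt"

      // Tensor is a stand-in for the real tensor type; only the shape matters here.
      type Tensor struct {
          shape []int
          data  []float32
      }

      // fromFloatSlice mirrors the described convention: a shape mismatch (or an
      // allocation failure) is a programming error on the caller's side, so it
      // panics rather than returning an error.
      func fromFloatSlice(data []float32, shape ...int) *Tensor {
          n := 1
          for _, d := range shape {
              n *= d
          }
          if n != len(data) {
              panic(fmt.Sprintf("tensor shape %v does not match %d elements", shape, len(data)))
          }
          return &Tensor{shape: shape, data: data}
      }

      func main() {
          t := fromFloatSlice([]float32{1, 2, 3, 4, 5, 6}, 2, 3)
          fmt.Println("shape:", t.shape)

          // fromFloatSlice([]float32{1, 2, 3}, 2, 3) // would panic: 2*3 != 3
      }
      ```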
    • ollamarunner: Memory usage reporting · 73d6a82c
      Jesse Gross authored
      This provides granular information about the backend memory allocations
      required by the runner:
       - Per backend
       - Per layer
       - Weights, cache and graph
       - Allocation status
      
      This can be used for debugging and validating memory estimates.
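
      An illustrative Go shape for this kind of report (per backend, per
      layer, weights/cache/graph, allocation status); the type and field
      names are assumptions, not the runner's actual structures:

      ```go
      package main

      import "fmt"

      // allocation records one buffer: the size requested and whether the
      // allocation succeeded.
      type allocation struct {
          Size      uint64
          Allocated bool
      }

      // layerMemory splits a single layer into weights, cache and graph allocations.
      type layerMemory struct {
          Weights, Cache, Graph allocation
      }

      // backendMemory groups the per-layer numbers under the backend
      // (a GPU or the CPU) that holds them.
      type backendMemory struct {
          Name   string
          Layers []layerMemory
      }

      func main() {
          report := []backendMemory{{
              Name: "CUDA0",
              Layers: []layerMemory{{
                  Weights: allocation{Size: 512 << 20, Allocated: true},
                  Cache:   allocation{Size: 64 << 20, Allocated: true},
                  Graph:   allocation{Size: 32 << 20, Allocated: false},
              }},
          }}
          for _, b := range report {
              fmt.Printf("%s: %d layer(s)\n", b.Name, len(b.Layers))
          }
      }
      ```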
    • ggml: Report graph memory for failed allocations · 6db8a377
      Jesse Gross authored
      GGML has a function to report the allocated size of a backend buffer.
      However, this returns 0 if we tried to allocate a buffer and it failed.
      For memory management purposes, it's important to know how much we were
      trying to allocate. This extends the API to report attempted sizes for
      all buffers and whether each allocation succeeded.
    • sched: fix runner leak during reloading unload (#10819) · d950ff12
      Daniel Hiltgen authored
      When the same model is being reloaded rapidly with client connections
      being canceled before the model finishes loading, the queued unload
      event could cause a leak of runners by deleting a different runner from
      the loaded list.
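
      A heavily simplified, hypothetical sketch of the kind of guard that
      avoids this: a queued unload only deletes the runner it was issued
      for, so a stale event cannot remove a newly reloaded runner. All
      names and structure here are illustrative, not the scheduler's
      actual code:

      ```go
      package main

      import (
          "fmt"
          "sync"
      )

      // runner is a stand-in for a loaded model runner.
      type runner struct {
          id    int
          model string
      }

      type scheduler struct {
          mu     sync.Mutex
          loaded map[string]*runner // keyed by model name
      }

      // unload removes a runner from the loaded list, but only if the map still
      // holds the same runner the unload event was queued for. A stale event for
      // an already-replaced runner is ignored, so a racing reload does not leak
      // its freshly loaded runner.
      func (s *scheduler) unload(r *runner) {
          s.mu.Lock()
          defer s.mu.Unlock()
          if cur, ok := s.loaded[r.model]; ok && cur == r {
              delete(s.loaded, r.model)
              return
          }
          fmt.Printf("ignoring stale unload for runner %d\n", r.id)
      }

      func main() {
          s := &scheduler{loaded: map[string]*runner{}}

          old := &runner{id: 1, model: "llama3"}
          s.loaded[old.model] = old

          // The model is reloaded before the queued unload of the old runner runs.
          fresh := &runner{id: 2, model: "llama3"}
          s.loaded[fresh.model] = fresh

          s.unload(old) // stale: must not delete the fresh runner
          fmt.Println("still loaded:", s.loaded["llama3"].id)
      }
      ```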
    • fix: mllama quality (#10807) · adff143b
      Michael Yang authored
      * fix mllama convert
      
      - transform attn_gate and ffn_gate
      - swap attention heads for vision models
      
      * fix mllama
      
      the MLP gate was applied in the wrong place
    • server: improve tensor quantization fallback logic (#10806) · fbe6ae28
      Bruce MacDonald authored
      Fall back to alternative quantization types when a tensor's dimensions aren't divisible by the block size required for the originally requested quantization type. If the retried quantization types also fail, the system ultimately falls back to F16 (half-precision floating point), which has a block size of 1 and can handle any tensor dimension.
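
      A hedged sketch of the fallback rule described; apart from F16
      having a block size of 1, the block sizes and candidate order here
      are illustrative, not the server's actual table:

      ```go
      package main

      import "fmt"

      // blockSize maps a quantization type to the number of elements per block.
      // Only F16's block size of 1 is stated above; the rest are illustrative.
      var blockSize = map[string]int{
          "Q4_K": 256,
          "Q4_0": 32,
          "F16":  1,
      }

      // pickQuantization returns the first candidate whose block size evenly
      // divides the tensor's innermost dimension, falling back to F16, which
      // accepts any dimension.
      func pickQuantization(dim int, candidates ...string) string {
          for _, q := range candidates {
              if bs, ok := blockSize[q]; ok && dim%bs == 0 {
                  return q
              }
          }
          return "F16"
      }

      func main() {
          fmt.Println(pickQuantization(4096, "Q4_K", "Q4_0")) // Q4_K: 4096 is divisible by 256
          fmt.Println(pickQuantization(100, "Q4_K", "Q4_0"))  // F16: 100 fits neither block size
      }
      ```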
    • integration: add qwen2.5-vl (#10815) · fdd4d479
      Daniel Hiltgen authored
      Replace the older llava model with qwen2.5 for vision tests
      Skip split-batch test on small VRAM systems to avoid excessive test time
  9. 21 May, 2025 7 commits
  10. 20 May, 2025 2 commits
  11. 19 May, 2025 6 commits
    • fix llama and mistral3 models (#10774) · ff180c34
      Michael Yang authored
      * fix llama model
      
      * fix mistral3.1 model
      
      do not set default vision layers
    • llm: Use first layer as memory buffer in estimation · 3fe74fba
      Jesse Gross authored
      This is a partial revert of 0478d440 "Fixed over vram allcation dure to
      small initial layer sizes."
      
      Previously we used the size of the first layer as an extra reserved
      amount of space to buffer our memory estimates. The above commit
      changed this to use the largest layer. However, this had performance
      impacts on more models than the original commit was trying to fix.
      
      This is only a heuristic with no ideal solution, so this change goes
      back to the historic behavior.
      
      Fixes: #10765, #10756, #10752, #10726
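
      The restored heuristic, as a small illustrative sketch (the helper
      name and the numbers are made up): reserve extra headroom equal to
      the first layer's size on top of the summed layer sizes, rather than
      the largest layer's size:

      ```go
      package main

      import "fmt"

      // estimateVRAM sums the per-layer sizes and reserves extra headroom equal
      // to the size of the first layer, the historic behavior this commit
      // restores (the reverted change reserved the largest layer instead).
      func estimateVRAM(layerSizes []uint64) uint64 {
          var total uint64
          for _, s := range layerSizes {
              total += s
          }
          if len(layerSizes) > 0 {
              total += layerSizes[0] // buffer: first layer, not the largest layer
          }
          return total
      }

      func main() {
          layers := []uint64{100 << 20, 400 << 20, 400 << 20} // illustrative sizes in bytes
          fmt.Printf("estimate: %d MiB\n", estimateVRAM(layers)>>20)
      }
      ```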
    • 1a0cfd08
      Daniel Hiltgen authored
    • ggml: Separate tensor load from backend creation · 94ab428e
      Jesse Gross authored
      Currently, when the backend is created, the tensors are loaded at the
      same time, which is a slow operation. This separates them to be two
      steps:
       - Create backend, including enumerating tensors and memory allocation
       - Loading tensor data
      
      This allows more flexibility in managing model loading.
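
      A minimal sketch of the two-step split described, with hypothetical
      type and method names rather than the real backend API:

      ```go
      package main

      import "fmt"

      // Backend models the split described: creation enumerates tensors and
      // allocates memory, while loading the tensor data is a separate, slower step.
      type Backend struct {
          tensors []string
          loaded  bool
      }

      // NewBackend does the fast part: enumerate tensors and allocate buffers.
      func NewBackend(tensors []string) *Backend {
          return &Backend{tensors: tensors}
      }

      // LoadTensors does the slow part: read tensor data into the allocated buffers.
      func (b *Backend) LoadTensors() error {
          // ... read weights from disk into the buffers allocated above ...
          b.loaded = true
          return nil
      }

      func main() {
          b := NewBackend([]string{"token_embd.weight", "blk.0.attn_q.weight"})
          // The caller can now inspect the memory layout, report estimates, etc.,
          // before paying for the data load.
          if err := b.LoadTensors(); err != nil {
              fmt.Println("load failed:", err)
              return
          }
          fmt.Println("loaded", len(b.tensors), "tensors")
      }
      ```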
    • llm: Estimate projector memory correctly for Ollama engine · d7555774
      Jesse Gross authored
      The Llama engine always places the vision projector on the first GPU
      if one exists. However, the Ollama engine groups it with the output
      layer, which means the projector is only offloaded if all other layers
      are offloaded. The memory estimation code always assumes the former
      layout - this changes it to use the correct layout based on the engine.
      
      This addresses two impacts of the current behavior:
       - In multi-GPU setups, we can crash with OOM errors when we try to
         allocate memory on a full GPU while another still has space.
       - If the vision projector is large, it may prevent us from offloading
         anything when we could have fit some of the text layers.
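
      A hedged sketch of the layout difference the estimator now accounts
      for; the engine names come from the message, but the types, fields
      and placement helper are illustrative, not the estimation code:

      ```go
      package main

      import "fmt"

      type layout struct {
          // perGPU[i] is the estimated memory assigned to GPU i.
          perGPU []uint64
      }

      // placeProjector mimics the difference described: the llama engine puts the
      // vision projector on the first GPU, while the ollama engine groups it with
      // the output layer, so it is only offloaded when everything else is.
      func placeProjector(l *layout, projectorSize uint64, engine string, outputGPU int, fullyOffloaded bool) {
          switch engine {
          case "llama":
              l.perGPU[0] += projectorSize
          case "ollama":
              if fullyOffloaded {
                  l.perGPU[outputGPU] += projectorSize
              }
              // otherwise the projector stays in system memory with the output layer
          }
      }

      func main() {
          l := &layout{perGPU: make([]uint64, 2)}
          placeProjector(l, 800<<20, "ollama", 1, true)
          fmt.Println(l.perGPU) // [0 838860800]
      }
      ```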
    • llm: Consistently track unassigned model data · a2cc8571
      Jesse Gross authored
      In some cases, if we fail to assign a piece of the model to a GPU then
      we lose track of this data. Although it doesn't change the memory
      allocation, it does affect the total size of the model reported by
      tools such as ollama ps (and also the percent offloaded).
      
      This makes it look like setting num_gpu isn't reflected in ollama ps.
      That isn't true, but the offload percentage may appear not to change.
      
      Spreading the model across more GPUs will continue to impact the
      reported total size of the model.
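
      A small illustrative sketch of the bookkeeping described, with
      hypothetical field names: unassigned data stays in the totals so the
      reported model size and offload percentage remain accurate:

      ```go
      package main

      import "fmt"

      // modelMemory tracks every piece of the model, including data that could
      // not be assigned to a GPU, so the total size and offload percentage
      // reported by tools like `ollama ps` stay accurate.
      type modelMemory struct {
          assignedGPU uint64 // data placed on GPUs
          assignedCPU uint64 // data placed in system memory
          unassigned  uint64 // data we failed to place anywhere
      }

      func (m modelMemory) total() uint64 {
          return m.assignedGPU + m.assignedCPU + m.unassigned
      }

      func (m modelMemory) percentOffloaded() float64 {
          if m.total() == 0 {
              return 0
          }
          return 100 * float64(m.assignedGPU) / float64(m.total())
      }

      func main() {
          m := modelMemory{assignedGPU: 6 << 30, assignedCPU: 1 << 30, unassigned: 1 << 30}
          fmt.Printf("total: %d GiB, offloaded: %.0f%%\n", m.total()>>30, m.percentOffloaded())
      }
      ```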