1. 06 Jan, 2026 1 commit
  2. 18 Dec, 2025 1 commit
  3. 10 Dec, 2025 2 commits
  4. 04 Dec, 2025 1 commit
  5. 02 Dec, 2025 1 commit
  6. 16 Nov, 2025 1 commit
  7. 05 Nov, 2025 2 commits
  8. 30 Oct, 2025 1 commit
    • fix(cmd): unload model before removal (#12832) · ed78e127
      Michael Yang authored
      this change fixes two bugs with `ollama rm`:
      
      1. before a model is removed, it should first be stopped, but this
         only happened for the first argument and was skipped for all
         other models
      2. models were unloaded indiscriminately; unloading errors for
         cloud models and should be skipped for them
  9. 26 Sep, 2025 1 commit
    • bugfix: restore the current runOptions if loading fails in the CLI (#12402) · b04e46da
      Patrick Devine authored
      There are two bugs when using `/load <model>` for a model that doesn't exist, namely:
        1. it will not restore the current model settings if the current model is a thinking model; and
  2. it will crash if the current model is a non-thinking model
      
      This bug fix saves the current runOptions and then restores them if the model load
      doesn't happen. It also fixes the crash happening for non-thinking models.
  10. 25 Sep, 2025 1 commit
  11. 23 Sep, 2025 1 commit
    • auth: fix problems with the ollama keypairs (#12373) · 64883e3c
      Patrick Devine authored
      * auth: fix problems with the ollama keypairs
      
      This change adds several fixes including:
        - reading in the pubkey files correctly
        - fixing the push unit test to create a keypair file in a temp directory
  - not returning 500 errors for normal status errors
  12. 17 Sep, 2025 1 commit
  13. 11 Sep, 2025 1 commit
  14. 15 Aug, 2025 1 commit
  15. 05 Aug, 2025 1 commit
    • gpt-oss (#11672) · fa7776fd
      Michael Yang authored
      
      
      * bf16
      
      * tests
      
      * gpt-oss
      
      * enable gptoss for engine
      
      * rough estimate
      
      * convert to mxfp4
      
      * handle safetensors U8
      
      * clamp glu/linear
      
      * update tokenizer
      
      * MXFP4 support
      
      This implements the Open Compute Microscaling (MX) FP4 format
      as a tensor type with backend implementations focusing
      on mulmat and mulmatid on CPU, CUDA, and Metal.
      
      * Unit tests for MXFP4 support
      
      This exercises various operations and shapes on both CPU and GPU (if detected
      on the system)
      
      * cuda graph
      
      * unit test adjustments
      
      * cuda: optimize memory access
      
      Read 4 bytes at a time (8 elements) when performing mul_mat_vec_mxfp4
      
      * mac: fix crash on old macos versions
      
      cblas_sgemm is only supported on v13.3 and up, however bf16 is
      only supported on v14+ so we were falling back to ggml-blas and
      crashing on bf16 tensors.  Checking for the function being null
      seems to be the simplest way to conditionally avoid registering the
      backend.
      
      * server: Minimum context length for gptoss
      
      This model requires a minimum context length of 8192 to function
      effectively. Users can set higher values through all normal mechanisms
      but lower values will be silently reset.
      
      * ggml: Multiply by numParallel for gptoss sliding window
      
      When computing the graph size estimate, the context size is already
      multiplied by numParallel so estimates reflect that. However, since
      sliding window models use a smaller, fixed context size, they need
      to manually take numParallel into account.
      
      * gpt-oss integration
      
      includes harmony parser and thinking levels, etc.
      
      * fix sync
      
      * fix tests
      
      * fix lint
      
      ---------
      Co-authored-by: Daniel Hiltgen <daniel@ollama.com>
      Co-authored-by: Jesse Gross <jesse@ollama.com>
      Co-authored-by: Devon Rifkin <drifkin@drifkin.net>
  16. 24 Jul, 2025 1 commit
  17. 22 Jul, 2025 1 commit
  18. 17 Jul, 2025 1 commit
  19. 16 Jul, 2025 1 commit
  20. 08 Jul, 2025 1 commit
  21. 09 Jun, 2025 1 commit
  22. 08 Jun, 2025 1 commit
  23. 06 Jun, 2025 2 commits
  24. 29 May, 2025 1 commit
    • add thinking support to the api and cli (#10584) · 5f57b0ef
      Devon Rifkin authored
      - Both `/api/generate` and `/api/chat` now accept a `"think"`
        option that allows specifying whether thinking mode should be on or
        not
      - Templates get passed this new option so, e.g., qwen3's template can
        put `/think` or `/no_think` in the system prompt depending on the
        value of the setting
      - Models' thinking support is inferred by inspecting model templates.
        The prefix and suffix the parser uses to identify thinking support is
        also automatically inferred from templates
      - Thinking control & parsing is opt-in via the API to prevent breaking
        existing API consumers. If the `"think"` option is not specified, the
        behavior is unchanged from previous versions of ollama
      - Add parsing for thinking blocks in both streaming/non-streaming mode
        in both `/generate` and `/chat`
      - Update the CLI to make use of these changes. Users can pass `--think`
        or `--think=false` to control thinking, or during an interactive
        session they can use the commands `/set think` or `/set nothink`
      - A `--hidethinking` option has also been added to the CLI. This makes
        it easy to use thinking in scripting scenarios like
        `ollama run qwen3 --think --hidethinking "my question here"` where you
        just want to see the answer but still want the benefits of thinking
        models
  25. 21 May, 2025 1 commit
  26. 15 May, 2025 2 commits
  27. 13 May, 2025 1 commit
  28. 10 May, 2025 1 commit
  29. 08 May, 2025 1 commit
  30. 06 May, 2025 1 commit
    • Move quantization to new backend (#10363) · 42481045
      Daniel Hiltgen authored
      * Move quantization logic to GGML via new backend
      
      This moves the model-aware logic to Go code and calls GGML's quantization code for model creation.
      
      * Remove "add model quantizations"
      
      This is no longer needed now that quantization is implemented in Go+GGML code directly.
  31. 05 May, 2025 1 commit
  32. 28 Apr, 2025 1 commit
  33. 22 Apr, 2025 1 commit
    • increase default context length to 4096 (#10364) · 424f6486
      Devon Rifkin authored
      * increase default context length to 4096
      
      We lower the default numParallel from 4 to 2 and use these "savings" to
      double the default context length from 2048 to 4096.
      
      We're memory neutral in cases when we previously would've used
      numParallel == 4, but we add the following mitigation to handle some
      cases where we would have previously fallen back to 1x2048 due to low
      VRAM: we decide between 2048 and 4096 using a runtime check, choosing
      2048 if we're on a one GPU system with total VRAM of <= 4 GB. We
      purposefully don't check the available VRAM because we don't want the
      context window size to change unexpectedly based on the available VRAM.
      
      We plan on making the default even larger, but this is a relatively
      low-risk change we can make to quickly double it.
      
      * fix tests
      
      add an explicit context length so they don't get truncated. The code
      that converts -1 from being a signal for doing a runtime check isn't
      running as part of these tests.
      
      * tweak small gpu message
      
      * clarify context length default
      
      also make it actually show up in `ollama serve --help`
  34. 20 Apr, 2025 1 commit
  35. 16 Apr, 2025 1 commit
    • cmd: add retry/backoff (#10069) · 1e7f62cb
      Blake Mizerany authored
      This commit adds retry/backoff to the registry client for pull requests.
      
      Also, revert progress indication to match original client's until we can
      "get it right."
      
      Also, make WithTrace wrap existing traces instead of clobbering them.
      This allows clients to compose traces.
  36. 14 Apr, 2025 1 commit