1. 17 Dec, 2025 1 commit
  2. 13 Nov, 2025 1 commit
  3. 11 Nov, 2025 1 commit
    • server: add logprobs and top_logprobs support to Ollama's API (#12899) · 59241c5b
      Baptiste Jamin authored
      
      
      Adds logprobs support to Ollama's API, including Ollama's
      OpenAI-compatible API. When the new 'logprobs' boolean parameter is set,
      Ollama returns the log probability of each generated token.
      An integer 'top_logprobs' parameter (up to 20) can also be specified;
      when set, the API additionally returns that many of the most likely
      tokens, with their log probabilities, at each token position.
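
      As an illustration, a minimal sketch of requesting these fields through
      the OpenAI-compatible endpoint. The field names follow the OpenAI
      chat-completions schema; the exact response shape Ollama returns is an
      assumption here, and the model name is a placeholder.

      ```go
      // Sketch: request per-token log probabilities from Ollama's
      // OpenAI-compatible endpoint. Response field names follow the OpenAI
      // chat-completions schema and may not match Ollama exactly.
      package main

      import (
          "bytes"
          "encoding/json"
          "fmt"
          "net/http"
      )

      func main() {
          body, _ := json.Marshal(map[string]any{
              "model":        "llama3.2", // placeholder model name
              "messages":     []map[string]string{{"role": "user", "content": "Hi"}},
              "logprobs":     true, // new boolean parameter
              "top_logprobs": 5,    // integer, up to 20
          })
          resp, err := http.Post("http://localhost:11434/v1/chat/completions",
              "application/json", bytes.NewReader(body))
          if err != nil {
              panic(err)
          }
          defer resp.Body.Close()

          // Decode only the logprobs portion of the response.
          var out struct {
              Choices []struct {
                  Logprobs struct {
                      Content []struct {
                          Token       string  `json:"token"`
                          Logprob     float64 `json:"logprob"`
                          TopLogprobs []struct {
                              Token   string  `json:"token"`
                              Logprob float64 `json:"logprob"`
                          } `json:"top_logprobs"`
                      } `json:"content"`
                  } `json:"logprobs"`
              } `json:"choices"`
          }
          if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
              panic(err)
          }
          if len(out.Choices) > 0 {
              for _, tok := range out.Choices[0].Logprobs.Content {
                  fmt.Printf("%q\t%.3f\t(%d alternatives)\n",
                      tok.Token, tok.Logprob, len(tok.TopLogprobs))
              }
          }
      }
      ```
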
      Co-authored-by: Baptiste Jamin <baptiste@crisp.chat>
  4. 06 Nov, 2025 1 commit
  5. 05 Nov, 2025 1 commit
  6. 15 Oct, 2025 1 commit
  7. 13 Oct, 2025 1 commit
    • Qwen3VL Cloud Parser and Renderer (#12526) · 05982a95
      Grace authored
      
      
      * working for tool calls and tools (other than tool calls being in the incorrect order)
      
      * Tests work, other than image tags (tests do not go through the server) and tools (not in the correct order, but the contents are the same)
      
      * testing for qwen3vl parser - toolparser is working
      
      * changed the JSON tool parser to wrap the ToolCallFunction in a ToolCall object (see the sketch after this list)
      
      * Working parser for thinking models: assumes it starts in the thinking state, emits unambiguous content while thinking, and does not emit tool calls while thinking
      
      * changed the parser to start with collecting content
      
      * thinking prefill
      
      * add hasThinkingSupport parameter to parser
      
      * qwen3-vl -> qwen3-vl-instruct for renderer/parser
      
      * Add hasThinkingSupport=false to QwenVLParser
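
      A minimal sketch of the wrapping mentioned above; the type names and
      JSON layout here are illustrative stand-ins, not the actual definitions
      in the codebase.

      ```go
      // Sketch of wrapping a parsed tool-call function in a ToolCall object.
      // Types and field names are illustrative only.
      package main

      import (
          "encoding/json"
          "fmt"
      )

      type ToolCallFunction struct {
          Name      string         `json:"name"`
          Arguments map[string]any `json:"arguments"`
      }

      type ToolCall struct {
          Function ToolCallFunction `json:"function"`
      }

      // parseToolCalls decodes the model's raw JSON tool-call output and wraps
      // each function in a ToolCall so downstream code sees a uniform shape.
      func parseToolCalls(raw string) ([]ToolCall, error) {
          var fns []ToolCallFunction
          if err := json.Unmarshal([]byte(raw), &fns); err != nil {
              return nil, err
          }
          calls := make([]ToolCall, 0, len(fns))
          for _, fn := range fns {
              calls = append(calls, ToolCall{Function: fn})
          }
          return calls, nil
      }

      func main() {
          calls, err := parseToolCalls(`[{"name":"get_weather","arguments":{"city":"Paris"}}]`)
          if err != nil {
              panic(err)
          }
          fmt.Printf("%+v\n", calls)
      }
      ```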
      
      ---------
      Co-authored-by: Devon Rifkin <drifkin@drifkin.net>
  8. 11 Oct, 2025 1 commit
  9. 09 Oct, 2025 2 commits
  10. 08 Oct, 2025 1 commit
  11. 23 Sep, 2025 1 commit
    • auth: fix problems with the ollama keypairs (#12373) · 64883e3c
      Patrick Devine authored
      * auth: fix problems with the ollama keypairs
      
      This change adds several fixes including:
        - reading in the pubkey files correctly
        - fixing the push unit test to create a keypair file in a temp directory
        - not returning 500 errors for normal status errors
  12. 17 Sep, 2025 1 commit
  13. 15 Sep, 2025 1 commit
    • add qwen3-coder tool support · 47991940
      Devon Rifkin authored
      The format qwen3-coder uses is relatively unique, both in rendering and
      in parsing. To implement parsing, I wrote a custom parser in a similar
      style to harmony. For the rendering, I found that the logic would be
      much more difficult to follow in a template, so I introduced the concept
      of a built-in renderer that uses Go code, rather than a template, to
      generate prompts.
      
      I set us up for future built-in parsers and renderers by making it so
      they can be specified in a Modelfile like so:
      
      ```
      RENDERER "qwen3-coder"
      PARSER "qwen3-coder"
      ```
      
      These need to be provided explicitly because the architecture alone is
      not enough to understand what format the model expects to receive, and
      what format we expect it to output (e.g., qwen3-coder is `qwen3moe`,
      which includes other qwen3-family models as well)
      
      I haven't converted harmony to be one of these "built-ins" yet, since
      some of it is in flux with the changes @ParthSareen has been making to
      move harmony to the runner. It is likely that many other built-ins will
      need to move to the runner as well, but I'm able to slightly defer that
      decision since qwen3-coder doesn't have thinking (and therefore doesn't
      need to be in the runner to make structured outputs work). I expect to
      unify harmony with this approach very soon.
      
      Whether a particular model supports tools or thinking was previously
      inferred from templates, but when there is no template we now use the
      parser itself to declare what it supports. If we have future models that
      re-use the same parsing format, but have different capabilities, we'll
      want to parameterize them and give them different names to be specified
      as a `PARSER`.
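
      A rough sketch of what name-keyed built-ins with self-declared
      capabilities could look like; the interfaces, registry, and names here
      are assumptions for illustration, not the actual implementation.

      ```go
      // Illustrative sketch only: a name-keyed registry of built-in renderers
      // and parsers, where each parser declares the capabilities that were
      // previously inferred from templates.
      package builtins

      import "fmt"

      type Renderer interface {
          // Render turns a conversation into a prompt string.
          Render(messages []string) (string, error)
      }

      type Parser interface {
          // Parse splits raw model output into content and tool calls.
          Parse(output string) (content string, toolCalls []string, err error)
          // Capability declarations replace template inspection for built-ins.
          SupportsTools() bool
          SupportsThinking() bool
      }

      var (
          renderers = map[string]Renderer{}
          parsers   = map[string]Parser{}
      )

      // Register* would be called from init() in each built-in's package.
      func RegisterRenderer(name string, r Renderer) { renderers[name] = r }
      func RegisterParser(name string, p Parser)     { parsers[name] = p }

      // LookupParser resolves the name given in a Modelfile PARSER directive.
      func LookupParser(name string) (Parser, error) {
          p, ok := parsers[name]
          if !ok {
              return nil, fmt.Errorf("unknown parser %q", name)
          }
          return p, nil
      }
      ```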
      
      Misc changes:
      
      - I worked on the renderer by diffing outputs from the reference
        implementation and ours. To make it easier to do this, I extended
        <https://github.com/ollama/ollama/pull/11875> to also support
        returning the prompt via the openai compat layer
  14. 11 Sep, 2025 1 commit
  15. 27 Aug, 2025 1 commit
  16. 22 Aug, 2025 1 commit
  17. 15 Aug, 2025 1 commit
  18. 12 Aug, 2025 1 commit
  19. 05 Aug, 2025 2 commits
    • tools: support anyOf types · 30f8a68c
      Devon Rifkin authored
      afaik gpt-oss is the first model that meaningfully transforms tool
      function definitions in its template. We found that relatively common
      definitions that include `anyOf` were not working because the template
      was assuming that types were always defined via a `type` field.
      
      anyOf allows for fully recursive types, so I exposed a
      `toTypeScriptType()` function to handle this recursive logic in Go and
      keep the templates cleaner. The gpt-oss templates will need to be
      updated to use this.
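
      A rough sketch of the kind of recursion involved; the real
      `toTypeScriptType()` signature and schema handling are likely different.

      ```go
      // Sketch of recursively converting a JSON-schema fragment (including
      // anyOf) into a TypeScript-style type string. This illustrates the idea,
      // not the actual toTypeScriptType() code.
      package tools

      import "strings"

      type Schema struct {
          Type  string    `json:"type,omitempty"`
          AnyOf []*Schema `json:"anyOf,omitempty"`
          Items *Schema   `json:"items,omitempty"` // for arrays
      }

      func toTypeScriptType(s *Schema) string {
          switch {
          case s == nil:
              return "any"
          case len(s.AnyOf) > 0:
              // anyOf becomes a union, recursing into each alternative.
              parts := make([]string, 0, len(s.AnyOf))
              for _, alt := range s.AnyOf {
                  parts = append(parts, toTypeScriptType(alt))
              }
              return strings.Join(parts, " | ")
          case s.Type == "array":
              return toTypeScriptType(s.Items) + "[]"
          case s.Type == "integer":
              return "number" // TypeScript has no integer type
          case s.Type != "":
              return s.Type // string, number, boolean, object, null
          default:
              return "any"
          }
      }
      ```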
      
      We should keep building out our function definition support to more
      fully support the parts of JSON Schema that make sense for this use
      case, but in the meantime this will unblock some users (e.g., Zed's
      ollama integration w/ gpt-oss). Probably the most urgent is proper array
      support.
    • gpt-oss (#11672) · fa7776fd
      Michael Yang authored
      
      
      * bf16
      
      * tests
      
      * gpt-oss
      
      * enable gptoss for engine
      
      * rough estimate
      
      * convert to mxfp4
      
      * handle safetensors U8
      
      * clamp glu/linear
      
      * update tokenizer
      
      * MXFP4 support
      
      This implements the Open Compute Microscaling (MX) FP4 format
      as a tensor type with backend implementations focusing
      on mulmat and mulmatid on CPU, CUDA, and Metal (a decode sketch of
      the block format follows at the end of this list).
      
      * Unit tests for MXFP4 support
      
      This exercises various operations and shapes on both CPU and GPU (if detected
      on the system)
      
      * cuda graph
      
      * unit test adjustments
      
      * cuda: optimize memory access
      
      Read 4 bytes at a time (8 elements) when performing mul_mat_vec_mxfp4
      
      * mac: fix crash on old macos versions
      
      cblas_sgemm is only supported on macOS 13.3 and up; however, bf16 is
      only supported on 14+, so we were falling back to ggml-blas and
      crashing on bf16 tensors. Checking whether the function is null
      seems to be the simplest way to conditionally avoid registering the
      backend.
      
      * server: Minimum context length for gptoss
      
      This model requires a minimum context length of 8192 to function
      effectively. Users can set higher values through all normal mechanisms
      but lower values will be silently reset.
      
      * ggml: Multiply by numParallel for gptoss sliding window
      
      When computing the graph size estimate, the context size is already
      multiplied by numParallel so estimates reflect that. However, since
      sliding window models use a smaller, fixed context size, they need
      to manually take numParallel into account.
      
      * gpt-oss integration
      
      includes harmony parser and thinking levels, etc.
      
      * fix sync
      
      * fix tests
      
      * fix lint
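
      For reference, a sketch of decoding one MXFP4 block as defined by the
      OCP Microscaling spec (32 FP4/E2M1 elements sharing one E8M0 scale).
      The byte layout assumed below (one scale byte followed by 16 bytes of
      packed nibbles, low nibble first) is illustrative and may not match the
      layout the backends actually use.

      ```go
      // Sketch of decoding a single MXFP4 block: 32 FP4 (E2M1) elements
      // sharing one E8M0 scale. Layout is an assumption for illustration.
      package mxfp4

      import "math"

      // e2m1 maps the 3-bit magnitude code of an FP4 (E2M1) value.
      var e2m1 = [8]float32{0, 0.5, 1, 1.5, 2, 3, 4, 6}

      // decodeBlock expands one 17-byte MXFP4 block into 32 float32 values.
      func decodeBlock(block [17]byte) [32]float32 {
          // E8M0 scale: a pure power of two with bias 127 (0xFF would be NaN).
          scale := float32(math.Ldexp(1, int(block[0])-127))

          var out [32]float32
          for i := 0; i < 16; i++ {
              lo := block[1+i] & 0x0F        // first element of the pair
              hi := (block[1+i] >> 4) & 0x0F // second element of the pair
              for j, nib := range []byte{lo, hi} {
                  v := e2m1[nib&0x7] // 3-bit magnitude
                  if nib&0x8 != 0 {  // sign bit
                      v = -v
                  }
                  out[2*i+j] = v * scale
              }
          }
          return out
      }
      ```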
      
      ---------
      Co-authored-by: Daniel Hiltgen <daniel@ollama.com>
      Co-authored-by: Jesse Gross <jesse@ollama.com>
      Co-authored-by: Devon Rifkin <drifkin@drifkin.net>
  20. 08 Jul, 2025 1 commit
  21. 07 Jul, 2025 1 commit
  22. 07 Jun, 2025 1 commit
  23. 04 Jun, 2025 1 commit
  24. 29 May, 2025 1 commit
    • add thinking support to the api and cli (#10584) · 5f57b0ef
      Devon Rifkin authored
      - Both `/api/generate` and `/api/chat` now accept a `"think"`
        option that allows specifying whether thinking mode should be on or
        off (a request sketch follows after this list)
      - Templates get passed this new option so, e.g., qwen3's template can
        put `/think` or `/no_think` in the system prompt depending on the
        value of the setting
      - Models' thinking support is inferred by inspecting model templates.
        The prefix and suffix the parser uses to identify thinking blocks are
        also automatically inferred from templates
      - Thinking control & parsing is opt-in via the API to prevent breaking
        existing API consumers. If the `"think"` option is not specified, the
        behavior is unchanged from previous versions of ollama
      - Add parsing for thinking blocks in both streaming/non-streaming mode
        in both `/generate` and `/chat`
      - Update the CLI to make use of these changes. Users can pass `--think`
        or `--think=false` to control thinking, or during an interactive
        session they can use the commands `/set think` or `/set nothink`
      - A `--hidethinking` option has also been added to the CLI. This makes
        it easy to use thinking in scripting scenarios like
        `ollama run qwen3 --think --hidethinking "my question here"` where you
        just want to see the answer but still want the benefits of thinking
        models
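
      A minimal sketch of enabling thinking through `/api/chat`. The `"think"`
      request option comes from the list above; surfacing the parsed block in
      a separate `thinking` field on the response message is an assumption,
      and the model name is a placeholder.

      ```go
      // Sketch: opt in to thinking via /api/chat and read the parsed output.
      // The response "thinking" field name is an assumption.
      package main

      import (
          "bytes"
          "encoding/json"
          "fmt"
          "net/http"
      )

      func main() {
          body, _ := json.Marshal(map[string]any{
              "model":    "qwen3",
              "messages": []map[string]string{{"role": "user", "content": "Why is the sky blue?"}},
              "think":    true,  // opt in; omitting it keeps the old behavior
              "stream":   false, // single response for simplicity
          })
          resp, err := http.Post("http://localhost:11434/api/chat",
              "application/json", bytes.NewReader(body))
          if err != nil {
              panic(err)
          }
          defer resp.Body.Close()

          var out struct {
              Message struct {
                  Thinking string `json:"thinking"`
                  Content  string `json:"content"`
              } `json:"message"`
          }
          if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
              panic(err)
          }
          fmt.Println("thinking:", out.Message.Thinking)
          fmt.Println("answer:  ", out.Message.Content)
      }
      ```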
  25. 08 May, 2025 1 commit
  26. 07 May, 2025 1 commit
  27. 05 May, 2025 1 commit
  28. 24 Apr, 2025 1 commit
  29. 10 Apr, 2025 1 commit
  30. 08 Apr, 2025 1 commit
  31. 07 Apr, 2025 1 commit
  32. 02 Apr, 2025 1 commit
  33. 01 Apr, 2025 1 commit
  34. 13 Mar, 2025 1 commit
  35. 05 Mar, 2025 1 commit
    • server/internal/registry: take over pulls from server package (#9485) · e2252d0f
      Blake Mizerany authored
      This commit replaces the old pull implementation in the server package
      with the new, faster, more robust pull implementation in the registry
      package.
      
      The new endpoint, and now the remove endpoint too, are behind the
      feature gate "client2", enabled only by setting the OLLAMA_EXPERIMENT
      environment variable to include "client2".
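
      The gate check amounts to something like the following sketch; treating
      OLLAMA_EXPERIMENT as a comma-separated list is an assumption, not the
      confirmed parsing behavior.

      ```go
      // Rough illustration of the feature gate: the new pull/remove paths are
      // used only when OLLAMA_EXPERIMENT includes "client2".
      package config // illustrative package name

      import (
          "os"
          "strings"
      )

      func client2Enabled() bool {
          for _, exp := range strings.Split(os.Getenv("OLLAMA_EXPERIMENT"), ",") {
              if strings.TrimSpace(exp) == "client2" {
                  return true
              }
          }
          return false
      }
      ```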
      
      Currently, the progress indication is wired to behave the same as in the
      previous implementation to avoid making changes to the CLI. Because the
      status reports happen at the start of the download and at the end of the
      write to disk, the progress indication is not as smooth as it could be.
      This is a known issue and will be addressed in a future change.
      
      This implementation may be ~0.5-1.0% slower in rare cases, depending on
      network and disk speed, but is generally MUCH faster and more robust
      than its predecessor in all other cases.
  36. 24 Feb, 2025 1 commit
  37. 08 Jan, 2025 1 commit
  38. 03 Jan, 2025 1 commit
    • api: remove unused create fields · 29a8975c
      Bruce MacDonald authored
      These fields are deprecated, but specifying them does not do anything. They are being removed because, unlike the other deprecated fields, which still work, these do not, so they don't match our existing pattern.