  1. 18 Dec, 2025 1 commit
  2. 09 Dec, 2025 1 commit
  3. 08 Nov, 2025 1 commit
  4. 15 Sep, 2025 1 commit
    •
      add qwen3-coder tool support · 47991940
      Devon Rifkin authored
      The format qwen3-coder uses is relatively unique, both in rendering and
      in parsing. To implement parsing, I wrote a custom parser in a similar
      style to harmony. For the rendering, I found that the logic would be
      much more difficult to follow in a template, so I introduced the concept
      of a built-in renderer that uses Go code, rather than a template, to
      generate prompts.
      
      I set us up for future built-in parsers and renderers by making it
      possible to specify them in a Modelfile like so:
      
      ```
      RENDERER "qwen3-coder"
      PARSER "qwen3-coder"
      ```
      
      These need to be provided explicitly because the architecture alone is
      not enough to determine what format the model expects to receive and
      what format we expect it to output (e.g., qwen3-coder's architecture is
      `qwen3moe`, which also covers other qwen3-family models).
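
      As a rough illustration of the built-in renderer concept, here is a
      minimal Go sketch: a registry keyed by the name given in the Modelfile's
      `RENDERER` directive, mapping to a Go function that builds the prompt.
      The type names, the registry, and the chat markers below are
      illustrative assumptions, not Ollama's actual internal API.

      ```go
      package main

      import (
      	"fmt"
      	"strings"
      )

      type Message struct {
      	Role    string
      	Content string
      }

      // Renderer turns a conversation into a model-specific prompt string.
      type Renderer func(msgs []Message) string

      // renderers maps the name given in a Modelfile RENDERER directive to a
      // Go-based prompt builder (hypothetical registry, for illustration).
      var renderers = map[string]Renderer{
      	"qwen3-coder": renderQwen3Coder,
      }

      // renderQwen3Coder is a stand-in for the real built-in renderer; the
      // ChatML-style markers are an assumption for illustration only.
      func renderQwen3Coder(msgs []Message) string {
      	var sb strings.Builder
      	for _, m := range msgs {
      		sb.WriteString("<|im_start|>" + m.Role + "\n" + m.Content + "<|im_end|>\n")
      	}
      	return sb.String()
      }

      func main() {
      	// Looked up by the explicit name, not by model architecture.
      	r, ok := renderers["qwen3-coder"]
      	if !ok {
      		panic("unknown renderer")
      	}
      	fmt.Print(r([]Message{{Role: "user", Content: "hi"}}))
      }
      ```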
      
      I haven't converted harmony to be one of these "built-ins" yet, since
      some of it is in flux with the changes @ParthSareen has been making to
      move harmony to the runner. It is likely that many other built-ins will
      need to move to the runner as well, but I'm able to slightly defer that
      decision since qwen3-coder doesn't have thinking (and therefore doesn't
      need to be in the runner to make structured outputs work). I expect to
      unify harmony with this approach very soon.
      
      Whether a particular model supports tools or thinking was previously
      inferred from its template, but without a template we now also use the
      parser itself to declare what it supports. If future models re-use the
      same parsing format but have different capabilities, we'll want to
      parameterize the parsers and give them different names to be specified
      as a `PARSER`.
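
      A minimal Go sketch of what parser-declared capabilities could look
      like. The interface and capability names are hypothetical; the fact
      that qwen3-coder supports tools but not thinking comes from the commit
      message above.

      ```go
      package main

      import "fmt"

      type Capability string

      const (
      	CapTools    Capability = "tools"
      	CapThinking Capability = "thinking"
      )

      // Parser is a hypothetical interface: with no template to inspect,
      // each built-in parser declares the capabilities it supports.
      type Parser interface {
      	Name() string
      	Capabilities() []Capability
      }

      type qwen3CoderParser struct{}

      func (qwen3CoderParser) Name() string { return "qwen3-coder" }

      // qwen3-coder supports tool calling but not thinking.
      func (qwen3CoderParser) Capabilities() []Capability {
      	return []Capability{CapTools}
      }

      func supports(p Parser, c Capability) bool {
      	for _, have := range p.Capabilities() {
      		if have == c {
      			return true
      		}
      	}
      	return false
      }

      func main() {
      	p := qwen3CoderParser{}
      	fmt.Println(supports(p, CapTools))    // true
      	fmt.Println(supports(p, CapThinking)) // false
      }
      ```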
      
      Misc changes:
      
      - I worked on the renderer by diffing outputs from the reference
        implementation and ours. To make this easier, I extended
        <https://github.com/ollama/ollama/pull/11875> to also support
        returning the prompt via the OpenAI compat layer.
  5. 05 Sep, 2025 1 commit
  6. 11 Jun, 2025 1 commit
  7. 08 May, 2025 1 commit
  8. 05 May, 2025 2 commits
  9. 07 Apr, 2025 1 commit
  10. 03 Apr, 2025 1 commit
    •
      model: support for mistral-small in the ollama runner · 6bd0a983
      Bruce MacDonald authored
      Mistral is a popular research lab making open-source models. This updates
      the forward pass of llama-architecture models to support both llama and
      mistral models by accounting for additional metadata present in mistral
      models and finding the correct dimensions for the output projection.
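
      A hedged sketch of the kind of dimension logic described, assuming
      mistral checkpoints carry an explicit head-dimension metadata key that
      llama checkpoints omit, so the runner falls back to the llama
      convention (embedding dim / head count). The metadata key name is
      illustrative, not the actual GGUF key.

      ```go
      package main

      import "fmt"

      type metadata map[string]int

      // headDim prefers an explicit head-dimension key (present in some
      // mistral models) and falls back to the llama-style derivation.
      func headDim(meta metadata, embeddingDim, numHeads int) int {
      	if d, ok := meta["attention.head_dim"]; ok { // illustrative key name
      		return d
      	}
      	return embeddingDim / numHeads // llama-style default
      }

      func main() {
      	llama := metadata{}
      	mistral := metadata{"attention.head_dim": 160}
      	fmt.Println(headDim(llama, 4096, 32))   // 128
      	fmt.Println(headDim(mistral, 5120, 32)) // 160
      }
      ```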
  11. 21 Mar, 2025 1 commit
  12. 20 Mar, 2025 1 commit
  13. 16 Jan, 2025 1 commit
  14. 15 Jan, 2025 1 commit
  15. 11 Jan, 2025 1 commit
  16. 08 Jan, 2025 1 commit
  17. 01 Jan, 2025 1 commit
  18. 14 Nov, 2024 1 commit
  19. 27 Jun, 2024 2 commits
  20. 13 Jun, 2024 2 commits
  21. 05 Jun, 2024 1 commit
  22. 20 May, 2024 1 commit
  23. 07 May, 2024 1 commit
  24. 01 May, 2024 10 commits
  25. 25 Jan, 2024 1 commit
  26. 18 Jan, 2024 1 commit
  27. 05 Dec, 2023 1 commit
  28. 16 Oct, 2023 1 commit