1. 16 Dec, 2025 1 commit
    • types: ConfigV2 and RootFS (#13504) · 45c47393
      Bruce MacDonald authored
      Refactored the ConfigV2 and RootFS types from server/images.go into a new types/model/config.go file under the model package, and updated all references to use model.ConfigV2 and model.RootFS. This allows the types to be used in other projects without worrying about compiling the C code in the llama package.
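      As a sketch of the shape of the moved types (the exact field set here is illustrative, not copied from types/model/config.go), the config blob resembles an OCI image config:

      ```go
      package main

      import (
      	"encoding/json"
      	"fmt"
      )

      // RootFS mirrors the OCI image-config rootfs object: a type tag plus
      // the digests of the layers. (Illustrative sketch.)
      type RootFS struct {
      	Type    string   `json:"type"`
      	DiffIDs []string `json:"diff_ids"`
      }

      // ConfigV2 is the model config blob referenced by a manifest.
      // (Field names here are an approximation for illustration.)
      type ConfigV2 struct {
      	ModelFormat   string   `json:"model_format"`
      	ModelFamily   string   `json:"model_family"`
      	ModelFamilies []string `json:"model_families"`
      	ModelType     string   `json:"model_type"`
      	FileType      string   `json:"file_type"`

      	Architecture string `json:"architecture"`
      	OS           string `json:"os"`
      	RootFS       RootFS `json:"rootfs"`
      }

      func main() {
      	cfg := ConfigV2{
      		ModelFormat:  "gguf",
      		Architecture: "amd64",
      		OS:           "linux",
      		RootFS:       RootFS{Type: "layers"},
      	}
      	b, _ := json.Marshal(cfg)
      	fmt.Println(string(b))
      }
      ```

      Because the types are plain structs with JSON tags and no cgo dependencies, importing them no longer pulls in the llama package.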
  2. 11 Dec, 2025 2 commits
  3. 08 Dec, 2025 1 commit
  4. 05 Dec, 2025 1 commit
  5. 20 Nov, 2025 1 commit
  6. 11 Nov, 2025 1 commit
    • server: add logprobs and top_logprobs support to Ollama's API (#12899) · 59241c5b
      Baptiste Jamin authored
      Adds logprobs support to Ollama's API, including Ollama's
      OpenAI-compatible API. When the new 'logprobs' boolean parameter is set,
      Ollama returns the log probability of each generated token. An integer
      'top_logprobs' parameter (up to 20) can also be specified; when set, the
      API additionally returns that many of the most likely tokens at each
      token position.
      Co-authored-by: Baptiste Jamin <baptiste@crisp.chat>
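      A sketch of the two new request fields and the documented cap of 20 (the surrounding request shape and the validate helper are illustrative, not the server's actual code):

      ```go
      package main

      import (
      	"encoding/json"
      	"fmt"
      )

      // GenerateRequest sketches the request-body fields relevant to this
      // feature. Only Logprobs and TopLogprobs come from the commit
      // description; the other fields are illustrative.
      type GenerateRequest struct {
      	Model       string `json:"model"`
      	Prompt      string `json:"prompt"`
      	Logprobs    bool   `json:"logprobs,omitempty"`
      	TopLogprobs int    `json:"top_logprobs,omitempty"`
      }

      // validate enforces the documented bound: top_logprobs may be at most
      // 20, and only makes sense when logprobs is requested.
      func validate(r GenerateRequest) error {
      	if r.TopLogprobs < 0 || r.TopLogprobs > 20 {
      		return fmt.Errorf("top_logprobs must be between 0 and 20, got %d", r.TopLogprobs)
      	}
      	if r.TopLogprobs > 0 && !r.Logprobs {
      		return fmt.Errorf("top_logprobs requires logprobs to be true")
      	}
      	return nil
      }

      func main() {
      	req := GenerateRequest{Model: "llama3.2", Prompt: "hi", Logprobs: true, TopLogprobs: 5}
      	if err := validate(req); err != nil {
      		panic(err)
      	}
      	b, _ := json.Marshal(req)
      	fmt.Println(string(b))
      }
      ```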
  7. 05 Nov, 2025 1 commit
  8. 29 Oct, 2025 1 commit
  9. 28 Oct, 2025 1 commit
  10. 27 Oct, 2025 1 commit
    • server: Consolidate embedding truncation in runner (#12730) · 5d347f6d
      nicole pardal authored
      Currently, checking the length of embedding prompts to ensure they fit
      in the context window (and truncating them if necessary) happens in two
      places: the Ollama server and the runner. This can lead to
      inconsistencies in both the checks and the reported number of tokens
      processed. Since this processing has to happen in the runner anyway,
      this consolidates all of the logic there.
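      The consolidated behaviour can be sketched as a single truncate-and-count step owned by the runner (truncateTokens is a hypothetical name, not the actual runner code):

      ```go
      package main

      import "fmt"

      // truncateTokens keeps at most ctxLen tokens, returning the kept slice
      // and the number of tokens actually processed. Doing this in exactly
      // one place keeps the length check and the reported token count
      // consistent, which is the point of the consolidation.
      func truncateTokens(tokens []int, ctxLen int) (kept []int, n int) {
      	if len(tokens) > ctxLen {
      		tokens = tokens[:ctxLen]
      	}
      	return tokens, len(tokens)
      }

      func main() {
      	kept, n := truncateTokens([]int{1, 2, 3, 4, 5}, 3)
      	fmt.Println(kept, n) // truncated to the context window
      }
      ```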
  11. 25 Oct, 2025 1 commit
  12. 22 Oct, 2025 1 commit
  13. 16 Oct, 2025 1 commit
  14. 11 Oct, 2025 2 commits
  15. 10 Oct, 2025 1 commit
  16. 09 Oct, 2025 3 commits
  17. 08 Oct, 2025 1 commit
  18. 05 Oct, 2025 1 commit
  19. 01 Oct, 2025 2 commits
    • Use runners for GPU discovery (#12090) · bc8909fb
      Daniel Hiltgen authored
      This revamps how we discover GPUs in the system by leveraging the Ollama
      runner. This should eliminate inconsistency between our GPU discovery and the
      runner's capabilities at runtime, particularly for cases where we try to filter
      out unsupported GPUs. Now the runner does that implicitly based on the actual
      device list. In some cases free VRAM reporting can be unreliable, which can
      lead to scheduling mistakes, so this also includes a patch to leverage more
      reliable VRAM-reporting libraries where available.
      
      Automatic workarounds have been removed as only one GPU leveraged this, which
      is now documented. This GPU will soon fall off the support matrix with the next
      ROCm bump.
      
      Additional cleanup of the scheduler and discovery packages can be done in the
      future once we have switched on the new memory management code, and removed
      support for the llama runner.
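      The core idea, that the runner's own device list is authoritative rather than a server-side support filter, can be sketched like this (Device and usableDevices are hypothetical names, not the real discovery types):

      ```go
      package main

      import "fmt"

      // Device sketches what a runner might report for each GPU it can
      // actually use. (Illustrative; not the real discovery types.)
      type Device struct {
      	ID       string
      	FreeVRAM uint64 // bytes, as reported by the runner
      }

      // usableDevices models the new flow: the runner only reports devices
      // it supports, so the scheduler just drops entries with no usable
      // free VRAM instead of second-guessing support itself.
      func usableDevices(reported []Device) []Device {
      	var out []Device
      	for _, d := range reported {
      		if d.FreeVRAM > 0 {
      			out = append(out, d)
      		}
      	}
      	return out
      }

      func main() {
      	devs := usableDevices([]Device{
      		{ID: "GPU-0", FreeVRAM: 8 << 30},
      		{ID: "GPU-1", FreeVRAM: 0}, // e.g. exhausted or misreported
      	})
      	fmt.Println(len(devs), devs[0].ID)
      }
      ```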
    • fix keep alive · 35ac4eb1
      Michael Yang authored
      This reference to keep alive was missed in #12041, so chat had
      different behaviour than generate.
  20. 23 Sep, 2025 1 commit
    • auth: fix problems with the ollama keypairs (#12373) · 64883e3c
      Patrick Devine authored
      * auth: fix problems with the ollama keypairs
      
      This change adds several fixes, including:
        - reading in the pubkey files correctly
        - fixing the push unit test to create a keypair file in a temp directory
        - not returning 500 errors for normal status errors
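      The pubkey round-trip that the fixed unit test exercises can be sketched like this (the PEM/PKIX encoding, file name, and helper are illustrative; Ollama's actual on-disk key format may differ):

      ```go
      package main

      import (
      	"crypto/ed25519"
      	"crypto/x509"
      	"encoding/pem"
      	"fmt"
      	"os"
      	"path/filepath"
      )

      // writeAndReadPubKey round-trips an ed25519 public key through a file
      // under dir, mirroring the test pattern: write the keypair into a temp
      // directory, then read the pubkey file back correctly.
      func writeAndReadPubKey(dir string, pub ed25519.PublicKey) (ed25519.PublicKey, error) {
      	der, err := x509.MarshalPKIXPublicKey(pub)
      	if err != nil {
      		return nil, err
      	}
      	path := filepath.Join(dir, "id_ed25519.pub")
      	data := pem.EncodeToMemory(&pem.Block{Type: "PUBLIC KEY", Bytes: der})
      	if err := os.WriteFile(path, data, 0o644); err != nil {
      		return nil, err
      	}

      	raw, err := os.ReadFile(path)
      	if err != nil {
      		return nil, err
      	}
      	block, _ := pem.Decode(raw)
      	if block == nil {
      		return nil, fmt.Errorf("no PEM block in %s", path)
      	}
      	parsed, err := x509.ParsePKIXPublicKey(block.Bytes)
      	if err != nil {
      		return nil, err
      	}
      	return parsed.(ed25519.PublicKey), nil
      }

      func main() {
      	pub, _, err := ed25519.GenerateKey(nil) // nil reader = crypto/rand
      	if err != nil {
      		panic(err)
      	}
      	dir, err := os.MkdirTemp("", "keys")
      	if err != nil {
      		panic(err)
      	}
      	defer os.RemoveAll(dir)
      	got, err := writeAndReadPubKey(dir, pub)
      	if err != nil {
      		panic(err)
      	}
      	fmt.Println(got.Equal(pub)) // the key read back matches the original
      }
      ```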
  21. 18 Sep, 2025 3 commits
  22. 17 Sep, 2025 2 commits
  23. 15 Sep, 2025 3 commits
    • model: implement bert in ollama engine (#9080) · 3f6642f6
      Michael Yang authored
      * fix truncate
      
      * s/SentencePieceModel/SentencePiece/
      
      * bert
      
      * wordpiece
      
      * refactor pooling
      
      * more tokenizers
      
      * normalize embeddings
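      The "normalize embeddings" step above can be sketched as a plain L2 normalization, as is typical before cosine-similarity comparisons (illustrative, not the engine's actual code):

      ```go
      package main

      import (
      	"fmt"
      	"math"
      )

      // normalize scales an embedding vector to unit L2 norm so that dot
      // products between normalized vectors equal their cosine similarity.
      func normalize(v []float32) []float32 {
      	var sum float64
      	for _, x := range v {
      		sum += float64(x) * float64(x)
      	}
      	n := math.Sqrt(sum)
      	if n == 0 {
      		return v // avoid dividing by zero for an all-zero vector
      	}
      	out := make([]float32, len(v))
      	for i, x := range v {
      		out[i] = float32(float64(x) / n)
      	}
      	return out
      }

      func main() {
      	fmt.Println(normalize([]float32{3, 4})) // a 3-4-5 triangle scales to [0.6 0.8]
      }
      ```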
    • address comments · 472feec2
      Devon Rifkin authored
    • add qwen3-coder tool support · 47991940
      Devon Rifkin authored
      The format qwen3-coder uses is relatively unique, both in rendering and
      in parsing. To implement parsing, I wrote a custom parser in a similar
      style to harmony. For the rendering, I found that the logic would be
      much more difficult to follow in a template, so I introduced the concept
      of a built-in renderer that uses Go code, rather than a template, to
      generate prompts.
      
      I set us up for future built-in parsers and renderers by making it so
      they can be specified in a Modelfile like so:
      
      ```
      RENDERER "qwen3-coder"
      PARSER "qwen3-coder"
      ```
      
      These need to be provided explicitly because the architecture alone is
      not enough to understand what format the model expects to receive, and
      what format we expect it to output (e.g., qwen3-coder is `qwen3moe`,
      which includes other qwen3-family models as well)
      
      I haven't converted harmony to be one of these "built-ins" yet, since
      some of it is in flux with the changes @ParthSareen has been making to
      move harmony to the runner. It is likely that many other built-ins will
      need to move to the runner as well, but I'm able to slightly defer that
      decision since qwen3-coder doesn't have thinking (and therefore doesn't
      need to be in the runner to make structured outputs work). I expect to
      unify harmony with this approach very soon.
      
      Whether a particular model supports tools or thinking was previously
      inferred from templates, but without a template we now also use the
      parser itself to declare what it supports. If we have future models that
      re-use the same parsing format, but have different capabilities, we'll
      want to parameterize them and give them different names to be specified
      as a `PARSER`.
      
      Misc changes:
      
      - I worked on the renderer by diffing outputs from the reference
        implementation and ours. To make it easier to do this, I extended
        <https://github.com/ollama/ollama/pull/11875> to also support
        returning the prompt via the openai compat layer
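      The built-in lookup described above could be sketched as a registry keyed by the name given in the Modelfile directive (every type and function name here is hypothetical except the "qwen3-coder" key; the prompt string is a placeholder, not the real format):

      ```go
      package main

      import (
      	"fmt"
      	"strings"
      )

      // Renderer turns chat messages into a model prompt. Built-in renderers
      // are written in Go instead of a template.
      type Renderer func(messages []string) string

      // renderers maps a Modelfile RENDERER name to its built-in
      // implementation, since the architecture alone (e.g. qwen3moe) is not
      // enough to pick the right prompt format.
      var renderers = map[string]Renderer{
      	"qwen3-coder": func(messages []string) string {
      		// Placeholder standing in for the real qwen3-coder format.
      		return "<|prompt|>" + strings.Join(messages, "\n") + "<|end|>"
      	},
      }

      // lookupRenderer resolves a RENDERER name, failing loudly for unknown
      // names rather than guessing from the architecture.
      func lookupRenderer(name string) (Renderer, error) {
      	r, ok := renderers[name]
      	if !ok {
      		return nil, fmt.Errorf("unknown renderer %q", name)
      	}
      	return r, nil
      }

      func main() {
      	r, err := lookupRenderer("qwen3-coder")
      	if err != nil {
      		panic(err)
      	}
      	fmt.Println(r([]string{"hello"}))
      }
      ```

      A parallel registry for PARSER names would let the parser itself declare which capabilities (tools, thinking) a model supports, as the commit describes.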
  24. 12 Sep, 2025 2 commits
  25. 11 Sep, 2025 1 commit
  26. 10 Sep, 2025 1 commit
  27. 08 Sep, 2025 1 commit
  28. 27 Aug, 2025 1 commit
  29. 22 Aug, 2025 1 commit