1. 29 May, 2025 1 commit
      add thinking support to the api and cli (#10584) · 5f57b0ef
      Devon Rifkin authored
      - Both `/api/generate` and `/api/chat` now accept a `"think"`
        option that allows specifying whether thinking mode should be on or
        not
      - Templates get passed this new option so, e.g., qwen3's template can
        put `/think` or `/no_think` in the system prompt depending on the
        value of the setting
      - Models' thinking support is inferred by inspecting model templates.
        The prefix and suffix the parser uses to identify thinking support is
        also automatically inferred from templates
      - Thinking control & parsing is opt-in via the API to prevent breaking
        existing API consumers. If the `"think"` option is not specified, the
        behavior is unchanged from previous versions of ollama
      - Add parsing for thinking blocks in both streaming and
        non-streaming modes in both `/generate` and `/chat`
      - Update the CLI to make use of these changes. Users can pass `--think`
        or `--think=false` to control thinking, or during an interactive
        session they can use the commands `/se...
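As a rough sketch of how a client might opt in, the `"think"` field below is taken from the commit description; the helper function and its defaults are hypothetical:

```python
import json

def build_chat_request(model, messages, think=None):
    """Build an /api/chat payload. "think" is omitted unless the caller
    sets it, so existing API consumers see unchanged behavior."""
    payload = {"model": model, "messages": messages}
    if think is not None:
        payload["think"] = think
    return payload

# Opt in to thinking for a model whose template supports it:
req = build_chat_request("qwen3", [{"role": "user", "content": "hi"}], think=True)
print(json.dumps(req))
```

Leaving the key out entirely when unset, rather than defaulting it to `false`, is what keeps the opt-in backward compatible.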
  2. 13 May, 2025 1 commit
  3. 10 May, 2025 1 commit
  4. 20 Apr, 2025 1 commit
  5. 15 Mar, 2025 1 commit
      fix: correctly save in interactive mode (#9788) · 2c8b4846
      Patrick Devine authored
      This fixes the case where a FROM line in a previous modelfile points
      to a file that may or may not be present in a different ollama
      instance. We shouldn't rely on the filename; instead, we should
      check whether the FROM line is a valid model name and point to that.
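The resolution order the fix describes can be sketched as: prefer a valid model name, and only fall back to treating the FROM value as a file path. The helper and its signature here are hypothetical, not the actual implementation:

```python
def resolve_from(from_value, known_models, path_exists):
    """Resolve a Modelfile FROM line: a valid model name wins over a
    filename that may not exist on this ollama instance."""
    if from_value in known_models:
        return ("model", from_value)
    if path_exists(from_value):
        return ("file", from_value)
    raise ValueError(f"FROM target not found: {from_value}")
```

Checking the model name first is what makes the saved modelfile portable across instances that never had the original blob file.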
  6. 13 Mar, 2025 1 commit
  7. 12 Mar, 2025 1 commit
  8. 01 Jan, 2025 1 commit
  9. 22 Dec, 2024 1 commit
  10. 26 Nov, 2024 1 commit
  11. 21 Nov, 2024 1 commit
  12. 18 Oct, 2024 1 commit
  13. 10 Oct, 2024 1 commit
      cli: Send all images in conversation history · 7fe39025
      Jesse Gross authored
      Currently the CLI only sends images from the most recent image-
      containing message. This prevents doing things like sending one
      message with an image and then a follow-up message with a second
      image, then asking for a comparison based on information not
      present in any text that was output.
      
      It's possible that some models have a problem with this, but the
      CLI is not the right place to work around it, since any adjustments
      are model-specific and should affect all clients.
      
      Both llava:34b and minicpm-v do reasonable things with multiple
      images in the history.
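The change amounts to collecting images across the whole conversation history rather than only the last image-bearing message. The message shape below is an assumption for illustration:

```python
def collect_images(messages):
    """Gather images from every message in the conversation history,
    not just the most recent message that contains images."""
    return [img for m in messages for img in m.get("images", [])]

history = [
    {"role": "user", "content": "what is this?", "images": ["cat.png"]},
    {"role": "assistant", "content": "a cat"},
    {"role": "user", "content": "compare with this", "images": ["dog.png"]},
]
print(collect_images(history))
```

Preserving message order in the flattened list keeps each image associated with its surrounding text when the history is replayed to the model.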
  14. 11 Sep, 2024 2 commits
  15. 02 Aug, 2024 1 commit
  16. 27 Jul, 2024 1 commit
  17. 26 Jul, 2024 2 commits
  18. 22 Jul, 2024 1 commit
  19. 14 Jul, 2024 1 commit
  20. 28 Jun, 2024 1 commit
  21. 25 Jun, 2024 1 commit
      cmd: defer stating model info until necessary (#5248) · 2aa91a93
      Blake Mizerany authored
      This commit changes the 'ollama run' command to defer fetching model
      information until it really needs it: that is, when in interactive
      mode.
      
      It also removes one case where the model information was fetched in
      duplicate, just before calling generateInteractive and then again,
      first thing, in generateInteractive.
      
      This positively impacts the performance of the command:
      
          ; time ./before run llama3 'hi'
          Hi! It's nice to meet you. Is there something I can help you with, or would you like to chat?
      
          ./before run llama3 'hi'  0.02s user 0.01s system 2% cpu 1.168 total
          ; time ./before run llama3 'hi'
          Hi! It's nice to meet you. Is there something I can help you with, or would you like to chat?
      
          ./before run llama3 'hi'  0.02s user 0.01s system 2% cpu 1.220 total
          ; time ./before run llama3 'hi'
          Hi! It's nice to meet you. Is there something I can help you with, or would you like to chat?
      
          ./before run llama3 'hi'  0.02s user 0.01s system 2% cpu 1.217 total
          ; time ./after run llama3 'hi'
          Hi! It's nice to meet you. Is there something I can help you with, or would you like to chat?
      
          ./after run llama3 'hi'  0.02s user 0.01s system 4% cpu 0.652 total
          ; time ./after run llama3 'hi'
          Hi! It's nice to meet you. Is there something I can help you with, or would you like to chat?
      
          ./after run llama3 'hi'  0.01s user 0.01s system 5% cpu 0.498 total
          ; time ./after run llama3 'hi'
          Hi! It's nice to meet you. Is there something I can help you with or would you like to chat?
      
          ./after run llama3 'hi'  0.01s user 0.01s system 3% cpu 0.479 total
          ; time ./after run llama3 'hi'
          Hi! It's nice to meet you. Is there something I can help you with, or would you like to chat?
      
          ./after run llama3 'hi'  0.02s user 0.01s system 5% cpu 0.507 total
          ; time ./after run llama3 'hi'
          Hi! It's nice to meet you. Is there something I can help you with, or would you like to chat?
      
          ./after run llama3 'hi'  0.02s user 0.01s system 5% cpu 0.507 total
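The deferral pattern amounts to skipping the metadata round-trip on the non-interactive path and memoizing it on the interactive one so it is never fetched twice. This is a sketch; the function names and return strings are hypothetical:

```python
def run(model, prompt, interactive, fetch_info):
    """Defer fetching model metadata until the interactive path needs
    it, and memoize so it is never fetched in duplicate."""
    cache = {}
    def model_info():
        if "info" not in cache:
            cache["info"] = fetch_info(model)
        return cache["info"]
    if not interactive:
        return f"generate({model}, {prompt!r})"  # no metadata round-trip
    model_info()  # first use triggers the fetch...
    model_info()  # ...later uses hit the cache
    return f"interactive({model})"
```

Skipping the fetch on the one-shot path is what accounts for the roughly halved wall-clock time in the measurements above.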
  22. 04 Jun, 2024 1 commit
  23. 24 May, 2024 1 commit
  24. 21 May, 2024 1 commit
  25. 18 May, 2024 1 commit
  26. 14 May, 2024 3 commits
  27. 07 May, 2024 1 commit
  28. 01 May, 2024 1 commit
  29. 23 Apr, 2024 1 commit
  30. 22 Apr, 2024 1 commit
  31. 29 Mar, 2024 1 commit
  32. 26 Mar, 2024 1 commit
  33. 20 Feb, 2024 1 commit
  34. 16 Feb, 2024 1 commit
  35. 12 Feb, 2024 1 commit
  36. 02 Feb, 2024 1 commit