1. 15 Sep, 2025 1 commit
  2. 11 Sep, 2025 1 commit
  3. 10 Sep, 2025 1 commit
    • Daniel Hiltgen
      Add v12 + v13 cuda support (#12000) · 17a023f3
      Daniel Hiltgen authored
      * Add support for upcoming NVIDIA Jetsons
      
      The latest Jetsons with JetPack 7 are moving to an SBSA-compatible model and
      will not require building a JetPack-specific variant.
      
      * cuda: bring back dual versions
      
      This adds back dual CUDA versions for our releases,
      with v11 and v13 to cover a broad set of GPUs and
      driver versions.
      
      * win: break up native builds in build_windows.ps1
      
      * v11 build working on windows and linux
      
      * switch to cuda v12.8 not JIT
      
      * Set CUDA compression to size
      
      * enhance manual install linux docs
  4. 08 Sep, 2025 1 commit
  5. 15 Aug, 2025 1 commit
  6. 14 Aug, 2025 1 commit
  7. 06 Aug, 2025 3 commits
  8. 05 Aug, 2025 1 commit
  9. 28 Jul, 2025 1 commit
  10. 22 Jul, 2025 1 commit
  11. 17 Jul, 2025 1 commit
  12. 16 Jul, 2025 1 commit
  13. 11 Jul, 2025 1 commit
  14. 08 Jul, 2025 2 commits
    • Daniel Hiltgen
      doc: add MacOS docs (#11334) · 66fb8575
      Daniel Hiltgen authored
      Also removes stale model directory instructions for Windows.
    • Daniel Hiltgen
      Reduce default parallelism to 1 (#11330) · 20c3266e
      Daniel Hiltgen authored
      The current scheduler algorithm of picking the parallelism based on available
      VRAM complicates the upcoming dynamic layer memory allocation algorithm.  This
      changes the default to 1, with the intent going forward that parallelism is
      explicit and will no longer be dynamically determined.  Removal of the dynamic
      logic will come in a follow up.
  15. 07 Jul, 2025 2 commits
  16. 05 Jul, 2025 1 commit
  17. 23 Jun, 2025 1 commit
    • Daniel Hiltgen
      Re-remove cuda v11 (#10694) · 1c6669e6
      Daniel Hiltgen authored
      * Re-remove cuda v11
      
      Revert the revert - drop v11 support, requiring drivers newer than Feb 2023
      
      This reverts commit c6bcdc42.
      
      * Simplify layout
      
      With only one version of the GPU libraries, we can simplify things down somewhat.  (Jetsons still require special handling)
      
      * distinct sbsa variant for linux arm64
      
      This avoids accidentally trying to load the sbsa cuda libraries on
      a jetson system which results in crashes.
      
      * temporary prevent rocm+cuda mixed loading
  18. 18 Jun, 2025 1 commit
  19. 07 Jun, 2025 2 commits
  20. 06 Jun, 2025 1 commit
  21. 04 Jun, 2025 1 commit
  22. 29 May, 2025 1 commit
    • Devon Rifkin
      add thinking support to the api and cli (#10584) · 5f57b0ef
      Devon Rifkin authored
      - Both `/api/generate` and `/api/chat` now accept a `"think"`
        option that allows specifying whether thinking mode should be on or
        not
      - Templates get passed this new option so, e.g., qwen3's template can
        put `/think` or `/no_think` in the system prompt depending on the
        value of the setting
      - Models' thinking support is inferred by inspecting model templates.
        The prefix and suffix the parser uses to identify thinking support is
        also automatically inferred from templates
      - Thinking control & parsing is opt-in via the API to prevent breaking
        existing API consumers. If the `"think"` option is not specified, the
        behavior is unchanged from previous versions of ollama
      - Add parsing for thinking blocks in both streaming/non-streaming mode
        in both `/generate` and `/chat`
      - Update the CLI to make use of these changes. Users can pass `--think`
        or `--think=false` to control thinking, or during an interactive
        session they can use the commands `/se...
  23. 24 May, 2025 1 commit
  24. 13 May, 2025 1 commit
  25. 12 May, 2025 1 commit
    • Daniel Hiltgen
      Follow up to #10363 (#10647) · 9d6df908
      Daniel Hiltgen authored
      The quantization PR didn't block all unsupported file types,
      which this PR fixes.  It also updates the API docs to reflect
      the now reduced set of supported types.
  26. 08 May, 2025 1 commit
  27. 07 May, 2025 1 commit
    • Daniel Hiltgen
      remove cuda v11 (#10569) · fa393554
      Daniel Hiltgen authored
      This reduces the size of our Windows installer payloads by ~256M by dropping
      support for NVIDIA drivers older than Feb 2023.  Hardware support is unchanged.
      
      Linux default bundle sizes are reduced by ~600M to 1G.
  28. 05 May, 2025 1 commit
  29. 29 Apr, 2025 1 commit
  30. 28 Apr, 2025 1 commit
  31. 22 Apr, 2025 1 commit
    • Devon Rifkin
      increase default context length to 4096 (#10364) · 424f6486
      Devon Rifkin authored
      * increase default context length to 4096
      
      We lower the default numParallel from 4 to 2 and use these "savings" to
      double the default context length from 2048 to 4096.
      
      We're memory neutral in cases when we previously would've used
      numParallel == 4, but we add the following mitigation to handle some
      cases where we would have previously fallen back to 1x2048 due to low
      VRAM: we decide between 2048 and 4096 using a runtime check, choosing
      2048 if we're on a one GPU system with total VRAM of <= 4 GB. We
      purposefully don't check the available VRAM because we don't want the
      context window size to change unexpectedly based on the available VRAM.
      
      We plan on making the default even larger, but this is a relatively
      low-risk change we can make to quickly double it.
      
      * fix tests
      
      add an explicit context length so they don't get truncated. The code
      that converts -1 from being a signal for doing a runtime check isn't
      running as part of these tests.
      
      * tweak small gpu message
      
      * clarify context length default
      
      also make it actually show up in `ollama serve --help`
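      The runtime check described above (fall back to 2048 only on a single-GPU system whose total VRAM is at most 4 GB, otherwise use the new 4096 default) can be sketched as follows; the function and parameter names are illustrative, not the actual Ollama scheduler code:

```go
package main

import "fmt"

// defaultContextLength picks the default num_ctx per the commit description:
// 2048 only on a one-GPU system with total VRAM <= 4 GiB, else 4096.
// It deliberately checks *total* rather than *available* VRAM so the default
// does not change unexpectedly as free memory fluctuates.
func defaultContextLength(gpuCount int, totalVRAMGiB float64) int {
	if gpuCount == 1 && totalVRAMGiB <= 4 {
		return 2048
	}
	return 4096
}

func main() {
	fmt.Println(defaultContextLength(1, 4))  // small single-GPU system
	fmt.Println(defaultContextLength(1, 24)) // larger single GPU
	fmt.Println(defaultContextLength(2, 4))  // multi-GPU system
}
```

      Combined with lowering the default numParallel from 4 to 2, this keeps total KV-cache memory roughly neutral (2 × 4096 vs the old 4 × 2048) on systems that previously ran at full parallelism.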
  32. 15 Apr, 2025 2 commits
  33. 08 Apr, 2025 1 commit
  34. 01 Apr, 2025 1 commit