1. 10 May, 2025 1 commit
  2. 08 May, 2025 1 commit
  3. 06 May, 2025 1 commit
    • Move quantization to new backend (#10363) · 42481045
      Daniel Hiltgen authored
      * Move quantization logic to GGML via new backend
      
      This moves the model-aware logic to Go code and calls GGML's quantization code for model creation (sketched below).
      
      * Remove "add model quantizations"
      
      This is no longer needed now that quantization is implemented in Go+GGML code directly.
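
      A minimal sketch of the resulting split, assuming a hypothetical QuantizeTensor cgo wrapper and targetType helper (neither name comes from this commit): the model-aware per-tensor decisions live in Go, and only the raw quantization math is delegated to GGML. The tensor-name heuristics shown are illustrative only.

        import "strings"

        // QuantizeTensor stands in for a cgo wrapper over GGML's quantization
        // routines; the real entrypoint and signature are assumptions here.
        func QuantizeTensor(name string, f32 []float32, target string) ([]byte, error) {
            // ... calls into GGML via cgo ...
            return nil, nil
        }

        // targetType is the model-aware part, now in Go: some tensors stay at
        // higher precision regardless of the requested quantization level.
        func targetType(name, requested string) string {
            if strings.HasSuffix(name, ".bias") || strings.Contains(name, "token_embd") {
                return "F16" // keep sensitive tensors at higher precision
            }
            return requested // e.g. "Q4_K_M"
        }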
  4. 05 May, 2025 1 commit
  5. 28 Apr, 2025 1 commit
  6. 22 Apr, 2025 1 commit
    • increase default context length to 4096 (#10364) · 424f6486
      Devon Rifkin authored
      * increase default context length to 4096
      
      We lower the default numParallel from 4 to 2 and use these "savings" to
      double the default context length from 2048 to 4096.
      
      We're memory-neutral in cases where we previously would've used
      numParallel == 4, but we add the following mitigation to handle some
      cases where we would have previously fallen back to 1x2048 due to low
      VRAM: we decide between 2048 and 4096 using a runtime check, choosing
      2048 if we're on a single-GPU system with total VRAM of <= 4 GB (see
      the sketch after this entry). We purposefully don't check the
      available VRAM because we don't want the context window size to
      change unexpectedly based on the available VRAM.
      
      We plan on making the default even larger, but this is a relatively
      low-risk change we can make to quickly double it.
      
      * fix tests
      
      Add an explicit context length so they don't get truncated. The code
      that treats -1 as a signal to do a runtime check isn't running as
      part of these tests.
      
      * tweak small gpu message
      
      * clarify context length default
      
      also make it actually show up in `ollama serve --help`
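
      A minimal sketch of that runtime check; GPUInfo and its TotalMemory field are stand-ins for Ollama's actual GPU discovery types.

        const (
            smallGPUContext  = 2048
            defaultContext   = 4096
            lowVRAMThreshold = 4 << 30 // 4 GiB
        )

        type GPUInfo struct {
            TotalMemory uint64 // total VRAM in bytes, not currently-free VRAM
        }

        // defaultNumCtx picks the default context length at load time. It looks
        // at total VRAM rather than available VRAM so the default doesn't
        // fluctuate with whatever else happens to be running on the GPU.
        func defaultNumCtx(gpus []GPUInfo) int {
            if len(gpus) == 1 && gpus[0].TotalMemory <= lowVRAMThreshold {
                return smallGPUContext
            }
            return defaultContext
        }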
  7. 20 Apr, 2025 1 commit
  8. 16 Apr, 2025 1 commit
    • cmd: add retry/backoff (#10069) · 1e7f62cb
      Blake Mizerany authored
      This commit adds retry/backoff to the registry client for pull requests (the pattern is sketched below).
      
      Also, revert progress indication to match the original client's until
      we can "get it right."
      
      Also, make WithTrace wrap existing traces instead of clobbering them.
      This allows clients to compose traces.
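
      A minimal sketch of the retry/backoff pattern; the attempt count, delays, and jitter here are illustrative values, not the client's actual ones.

        import (
            "context"
            "math/rand"
            "time"
        )

        // withRetry retries fn with exponential backoff plus jitter. This sketch
        // retries every error; real code would stop early on permanent failures
        // such as 4xx responses.
        func withRetry(ctx context.Context, attempts int, fn func() error) error {
            var err error
            for i := 0; i < attempts; i++ {
                if err = fn(); err == nil {
                    return nil
                }
                backoff := time.Duration(1<<i) * time.Second // 1s, 2s, 4s, ...
                jitter := time.Duration(rand.Int63n(int64(time.Second)))
                select {
                case <-time.After(backoff + jitter):
                case <-ctx.Done():
                    return ctx.Err()
                }
            }
            return err
        }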
  9. 14 Apr, 2025 1 commit
  10. 08 Apr, 2025 1 commit
  11. 02 Apr, 2025 1 commit
  12. 01 Apr, 2025 1 commit
  13. 21 Mar, 2025 1 commit
  14. 15 Mar, 2025 1 commit
    • fix: correctly save in interactive mode (#9788) · 2c8b4846
      Patrick Devine authored
      This fixes the case where the FROM line in a previous Modelfile
      points to a file which may or may not be present in a different
      ollama instance. We shouldn't rely on the filename, though; instead,
      check whether the FROM line is a valid model name and point to that
      (see the sketch below).
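
      A sketch of the preference described above; isValidModelName is a stand-in for Ollama's real model-name parsing, and here it only rejects obviously path-like values.

        import "strings"

        // isValidModelName stands in for real model-name validation.
        func isValidModelName(s string) bool {
            return !strings.HasPrefix(s, "/") && !strings.HasPrefix(s, "./") &&
                !strings.ContainsAny(s, " \t")
        }

        // fromLine builds the FROM line for a saved Modelfile: prefer the model
        // name over the underlying blob path, since that path may not exist in
        // a different ollama instance.
        func fromLine(modelName, blobPath string) string {
            if isValidModelName(modelName) {
                return "FROM " + modelName
            }
            return "FROM " + blobPath
        }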
  15. 13 Mar, 2025 1 commit
  16. 12 Mar, 2025 1 commit
  17. 04 Mar, 2025 2 commits
    • ml/backend/ggml: consolidate system info logging · 05a01fde
      Michael Yang authored
      - output backend system info when initializing the backend. this
        ensures the information is always present without an explicit call
      - convert to structured logging (sketched below)
      - enumerate devices rather than backends, since devices are ordered
      - track device indices grouped by device name
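
      A sketch of the structured-logging side using Go's log/slog; the message and field names are illustrative, not the backend's actual attributes.

        import "log/slog"

        type Device struct {
            Name string
        }

        // logSystemInfo emits one structured record per device at backend init,
        // so the info is always in the logs without an explicit call. Indices
        // are tracked per device name, relying on devices being enumerated in
        // a stable order.
        func logSystemInfo(devices []Device) {
            byName := map[string]int{}
            for i, d := range devices {
                idx := byName[d.Name]
                byName[d.Name]++
                slog.Info("system", "device", i, "name", d.Name, "index", idx)
            }
        }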
    • New engine: vision models and auto-fallback (#9113) · 1fdb351c
      Daniel Hiltgen authored
      * Include unified vision layers in memory prediction
      
      For newer vision models with a single gguf, include
      the projection estimates.
      
      * Adjust CLI to handle both styles of vision model metadata
      
      * Wire up new tokenizers for new engine
      
      If we're loading the new engine, use the new model text processor
      instead of calling into cgo wrappers for llama.cpp (see the sketch
      after this entry). This also cleans up some tech debt from the older
      tokenization flow for the C++ server, which was no longer used.
      
      This also adjusts the grammar handling logic to pass through to the
      new engine instead of utilizing the cgo schema-to-grammar call.
      
      * Lay foundation for auto selection of new engine
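
      A sketch of the shape of that selection, with made-up names throughout: prefer the new engine's native Go text processor, and fall back to the llama.cpp cgo wrapper for unsupported architectures.

        type TextProcessor interface {
            Encode(text string) ([]int32, error)
        }

        type goTokenizer struct{}  // new engine's native Go text processor
        type cgoTokenizer struct{} // wrapper over llama.cpp via cgo

        func (goTokenizer) Encode(string) ([]int32, error)  { return nil, nil }
        func (cgoTokenizer) Encode(string) ([]int32, error) { return nil, nil }

        // newEngineArchs lists architectures the new engine handles; the
        // contents are illustrative only.
        var newEngineArchs = map[string]bool{"llama": true}

        func textProcessorFor(arch string) TextProcessor {
            if newEngineArchs[arch] {
                return goTokenizer{}
            }
            return cgoTokenizer{}
        }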
  18. 03 Mar, 2025 1 commit
  19. 19 Feb, 2025 1 commit
  20. 14 Feb, 2025 1 commit
    • Runner for Ollama engine · ed443a03
      Jesse Gross authored
      This provides integration with the new Ollama engine
      (58245413 next ollama runner (#7913)) and the rest of the Ollama
      infrastructure such as the runner and Ollama server.
      
      In addition, it also builds out the KV cache infrastructure to
      support the requirements of how Ollama runs models (a hypothetical
      interface is sketched after this entry), such as:
       - Parallel processing
       - Memory management for defragmentation and shifting
       - Multi-modal models
      
      Both old and new engines continue to be supported. By default, only
      the old engine is used. To enable the new engine:
      
      Start the server with the OLLAMA_NEW_ENGINE environment variable set:
      OLLAMA_NEW_ENGINE=1 ./ollama serve
      
      Start a model that is supported by the Ollama engine. This one is Llama 3.1 8b Q4_K_M:
      ./ollama run jessegross/llama3.1
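
      A sketch of what such a cache needs to expose, as a hypothetical interface; the engine's actual API differs.

        // Cache captures the operations the Ollama server needs from a KV
        // cache in this sketch; the method set is illustrative.
        type Cache interface {
            // StartForward prepares the cache for a batch that may interleave
            // tokens from several sequences (parallel request processing).
            StartForward(seqIDs []int) error

            // Remove frees a token range so space can be reclaimed
            // (defragmentation) or the window shifted when the context fills.
            Remove(seqID int, begin, end int32) error

            // CopyPrefix shares a common prompt prefix between sequences.
            CopyPrefix(srcSeq, dstSeq int, length int32) error
        }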
  21. 16 Jan, 2025 1 commit
  22. 11 Jan, 2025 1 commit
  23. 09 Jan, 2025 1 commit
  24. 01 Jan, 2025 1 commit
  25. 22 Dec, 2024 1 commit
  26. 11 Dec, 2024 1 commit
  27. 10 Dec, 2024 1 commit
    • build: Make target improvements (#7499) · 4879a234
      Daniel Hiltgen authored
      * llama: wire up builtin runner
      
      This adds a new entrypoint into the ollama CLI to run the cgo-built
      runner. On Mac arm64, this will have GPU support, but on all other
      platforms it will be the lowest-common-denominator CPU build. After
      we fully transition to the new Go runners, more tech debt can be
      removed and we can stop building the "default" runner via make and
      rely on the builtin always.
      
      * build: Make target improvements
      
      Add a few new targets and help for building locally.
      This also adjusts the runner lookup to favor local builds, then
      runners relative to the executable, and finally payloads.
      
      * Support customized CPU flags for runners
      
      This implements a simplified custom CPU flags pattern for the runners.
      When built without overrides, the runner name contains the vector flag
      we check for (AVX) to ensure we don't try to run on unsupported
      systems and crash (see the sketch below). If the user builds a
      customized set, we omit the naming scheme and don't check for
      compatibility. This avoids checking requirements at runtime, so that
      logic has been removed as well. This can be used to build GPU runners
      with no vector flags, or CPU/GPU runners with additional flags
      (e.g. AVX512) enabled.
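
      A sketch of that compatibility check using golang.org/x/sys/cpu for feature detection; the name-based convention here is simplified.

        import (
            "strings"

            "golang.org/x/sys/cpu"
        )

        // runnerUsable reports whether a runner can run on this host: default
        // builds encode the required vector flag (AVX) in the runner name, so
        // require CPU support when it's present; customized builds omit the
        // flag from the name and skip the check entirely.
        func runnerUsable(name string) bool {
            if strings.Contains(name, "avx") {
                return cpu.X86.HasAVX
            }
            return true // custom build: the user opted out of compatibility checks
        }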
      
      * Use relative paths
      
      If the user checks out the repo in a path that contains spaces, make
      gets really confused, so use relative paths for everything in-repo
      to avoid breakage.
      
      * Remove payloads from main binary
      
      * install: clean up prior libraries
      
      This removes support for v0.3.6 and older versions (before the tar
      bundle) and ensures we clean up prior libraries before extracting
      the bundle(s). Without this change, runners and dependent libraries
      could leak when we update, leading to subtle runtime errors.
  28. 06 Dec, 2024 1 commit
  29. 05 Dec, 2024 2 commits
  30. 03 Dec, 2024 1 commit
  31. 30 Nov, 2024 1 commit
  32. 26 Nov, 2024 1 commit
  33. 25 Nov, 2024 1 commit
  34. 22 Nov, 2024 2 commits
    • server: remove out of date anonymous access check (#7785) · 7b5585b9
      Bruce MacDonald authored
      In the past, the ollama.com server would return a JWT that contained
      information about the user being authenticated. This was used to
      return different error messages to the user. This is no longer
      possible since the token used to authenticate no longer contains
      information about the user, so remove this code that no longer works.
      
      Follow-up changes will improve the error messages returned here, but
      it's good to clean up first.
    • Be quiet when redirecting output (#7360) · d88972ea
      Daniel Hiltgen authored
      This avoids emitting the progress indicators to stderr, and the
      interactive prompts to the output file or pipe. Running
      "ollama run model > out.txt" now exits immediately, and
      "echo hello | ollama run model > out.txt" produces zero stderr output
      and a typical response in out.txt (the underlying terminal check is
      sketched below).
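
      A sketch of the underlying check using golang.org/x/term: show prompts and progress only when the relevant stream is a terminal, and stay quiet when output is redirected to a file or pipe.

        import (
            "os"

            "golang.org/x/term"
        )

        // interactive reports whether stdout is a terminal. When it isn't
        // (e.g. "ollama run model > out.txt"), skip the interactive prompt and
        // just stream the response; progress indicators are likewise gated on
        // stderr being a terminal.
        func interactive() bool {
            return term.IsTerminal(int(os.Stdout.Fd()))
        }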
  35. 21 Nov, 2024 1 commit
  36. 14 Nov, 2024 1 commit
  37. 25 Oct, 2024 1 commit