1. 05 Aug, 2025 1 commit
  2. 20 Jun, 2025 1 commit
  3. 18 Jun, 2025 1 commit
  4. 12 Jun, 2025 1 commit
  5. 06 Jun, 2025 1 commit
  6. 29 May, 2025 1 commit
    • add thinking support to the api and cli (#10584) · 5f57b0ef
      Devon Rifkin authored
      - Both `/api/generate` and `/api/chat` now accept a `"think"` option
        that specifies whether thinking mode should be on or not (see the
        example request after this list)
      - Templates get passed this new option so, e.g., qwen3's template can
        put `/think` or `/no_think` in the system prompt depending on the
        value of the setting
      - Models' thinking support is inferred by inspecting model templates.
        The prefix and suffix the parser uses to identify thinking content are
        also automatically inferred from templates
      - Thinking control & parsing is opt-in via the API to prevent breaking
        existing API consumers. If the `"think"` option is not specified, the
        behavior is unchanged from previous versions of ollama
      - Add parsing for thinking blocks in both streaming and non-streaming
        modes in both `/generate` and `/chat`
      - Update the CLI to make use of these changes. Users can pass `--think`
        or `--think=false` to control thinking, or during an interactive
        session they can use the commands `/set think` or `/set nothink`
      - A `--hidethinking` option has also been added to the CLI. This makes
        it easy to use thinking in scripting scenarios like
        `ollama run qwen3 --think --hidethinking "my question here"` where you
        just want to see the answer but still want the benefits of thinking
        models
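      A minimal example of enabling the new option over the API (everything here
      other than the `"think"` field follows the standard `/api/chat` request
      shape; the model name and address are illustrative):

      ```go
      // Sketch: POST /api/chat with thinking enabled via the new "think" field.
      package main

      import (
          "bytes"
          "encoding/json"
          "fmt"
          "io"
          "net/http"
      )

      func main() {
          body, _ := json.Marshal(map[string]any{
              "model": "qwen3",
              "messages": []map[string]string{
                  {"role": "user", "content": "my question here"},
              },
              "think":  true,  // opt in to thinking mode
              "stream": false, // single response for simplicity
          })

          resp, err := http.Post("http://localhost:11434/api/chat", "application/json", bytes.NewReader(body))
          if err != nil {
              panic(err)
          }
          defer resp.Body.Close()

          out, _ := io.ReadAll(resp.Body)
          fmt.Println(string(out)) // thinking content is returned separately from the answer
      }
      ```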
  7. 19 May, 2025 1 commit
    • ggml: Separate tensor load from backend creation · 94ab428e
      Jesse Gross authored
      Currently, when the backend is created, the tensors are loaded at the
      same time, which is a slow operation. This separates them to be two
      steps:
       - Create backend, including enumerating tensors and memory allocation
       - Loading tensor data
      
      This allows more flexibility in managing model loading.
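      A hypothetical sketch of what the split looks like to a caller (these
      names and types are illustrative, not the actual ml/ggml API):

      ```go
      // Hypothetical sketch; names and types are illustrative only.
      package ml

      import "context"

      type Tensor struct {
          Name  string
          Shape []int
          Data  []byte
      }

      type Backend struct {
          tensors map[string]*Tensor // enumerated up front, data not yet read
      }

      // New enumerates tensors and allocates memory, but does not read tensor data.
      func New(modelPath string) (*Backend, error) {
          b := &Backend{tensors: make(map[string]*Tensor)}
          // ... parse model metadata, record tensor names/shapes, reserve memory ...
          return b, nil
      }

      // Load performs the slow step, reading tensor data, whenever the caller chooses.
      func (b *Backend) Load(ctx context.Context, progress func(float32)) error {
          // ... stream tensor data from disk into the reserved allocations ...
          return nil
      }
      ```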
  8. 01 May, 2025 1 commit
  9. 25 Apr, 2025 1 commit
  10. 02 Apr, 2025 1 commit
  11. 01 Apr, 2025 1 commit
  12. 28 Mar, 2025 1 commit
  13. 14 Feb, 2025 1 commit
    • next ollama runner (#7913) · 58245413
      Michael Yang authored
      
      
      feat: add new Ollama engine using ggml through cgo
      
      This change introduces a new way to run pretrained models. It introduces 3 high-level interfaces and a number of smaller helper interfaces to facilitate this (sketched after the list below).
      
      - `model.Model` defines the interface for a model architecture. Models such as `llama` and `mllama`, which are provided as examples, can implement the model's forward propagation in the `Forward` method. This method will be called to generate completions. This interface can be found in `model/model.go`
      - `ml.Backend` defines the interface for a backend tensor library, in this case `ggml`. Among other things, a Backend is responsible for loading a pretrained model into hardware (GPU, CPU, etc) and providing an interface for Models to access loaded tensors. This interface can be found in `ml/backend.go`
      - `ml.Tensor` defines the interface for a tensor and tensor operations
      
      This is the first implementation of the new engine. Follow up PRs will implement more features:
      
      - non-greedy sampling (#8410)
      - integration with Ollama and KV caching (#8301)
      - more model support (#9080) with more coming soon
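      An abridged sketch of these interfaces (method sets here are partly
      assumed for illustration; the real definitions live in `model/model.go`
      and `ml/backend.go` and carry more methods and options):

      ```go
      // Abridged and partly assumed; see model/model.go and ml/backend.go.
      package sketch

      // Model is a model architecture; Forward implements forward propagation
      // and is called to generate completions.
      type Model interface {
          Forward(ctx Context, opts Options) (Tensor, error) // signature assumed
      }

      // Backend is a tensor library (ggml here). It loads a pretrained model
      // into hardware (GPU, CPU, etc.) and gives Models access to loaded tensors.
      type Backend interface {
          Get(name string) Tensor // fetch a loaded tensor by name (assumed)
          NewContext() Context
      }

      // Tensor is a tensor plus the operations models build on.
      type Tensor interface {
          Add(ctx Context, other Tensor) Tensor
          Mulmat(ctx Context, other Tensor) Tensor // matrix multiplication (name assumed)
      }

      // Context and Options stand in for per-call state (compute graph, inputs).
      type Context interface{}
      type Options struct{}
      ```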
      Co-authored-by: Bruce MacDonald <brucewmacdonald@gmail.com>
  14. 01 Jan, 2025 1 commit
  15. 11 Dec, 2024 1 commit
  16. 25 Nov, 2024 1 commit
    • server: fix Transport override (#7834) · 2b7ed61c
      Blake Mizerany authored
      This changes makeRequest to update the http client Transport if and only
      if testMakeRequestDialContext is set. This is to avoid overriding the
      default Transport when testMakeRequestDialContext is nil, which broke
      existing behavior, including proxies, timeouts, and other behaviors.
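      A sketch of the guarded override (the variable name comes from the commit
      message; the surrounding request setup and signature are assumed):

      ```go
      import (
          "context"
          "net"
          "net/http"
      )

      // Sketch: a zero-value client falls back to http.DefaultTransport, so
      // proxies, timeouts, and other defaults stay intact unless the test dial
      // hook is set, in which case a cloned Transport gets the custom DialContext.
      func makeRequest(req *http.Request, testMakeRequestDialContext func(context.Context, string, string) (net.Conn, error)) (*http.Response, error) {
          client := &http.Client{}
          if testMakeRequestDialContext != nil {
              tr := http.DefaultTransport.(*http.Transport).Clone()
              tr.DialContext = testMakeRequestDialContext
              client.Transport = tr
          }
          return client.Do(req)
      }
      ```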
      
      Fixes #7829
      Fixes #7788
  17. 22 Nov, 2024 1 commit
    • server: remove out of date anonymous access check (#7785) · 7b5585b9
      Bruce MacDonald authored
      In the past the ollama.com server would return a JWT that contained
      information about the user being authenticated. This was used to return
      different error messages to the user. This is no longer possible since the
      token used to authenticate does not contain information about the user
      anymore. This removes the code, which no longer works.
      
      Follow-up changes will improve the error messages returned here, but it is
      good to clean this up first.
  18. 19 Nov, 2024 1 commit
    • server: allow mixed-case model names on push, pull, cp, and create (#7676) · 4b8a2e34
      Blake Mizerany authored
      This change allows mixed-case model names to be pushed, pulled, copied,
      and created. This was previously disallowed because the Ollama registry
      was backed by a Docker registry that enforced a naming convention
      forbidding mixed-case names; that is no longer the case.
      
      This does not break existing, intended behaviors.
      
      Also, make TestCase test a story of creating, updating, pulling, and
      copying a model with case variations, ensuring the model's manifest is
      updated correctly, and not duplicated across different files with
      different case variations.
  19. 05 Nov, 2024 1 commit
  20. 08 Oct, 2024 1 commit
    • Re-introduce the `llama` package (#5034) · 96efd905
      Jeffrey Morgan authored
      
      
      * Re-introduce the llama package
      
      This PR brings back the llama package, making it possible to call llama.cpp and
      ggml APIs from Go directly via CGo. This has a few advantages:
      
      - C APIs can be called directly from Go without needing to use the previous
        "server" REST API
      - On macOS and for CPU builds on Linux and Windows, Ollama can be built without
        a `go generate ./...` step, making it easy to get up and running to hack on
        parts of Ollama that don't require fast inference
      - Faster build times for AVX, AVX2, CUDA, and ROCm (a full build of all runners
        takes <5 min on a fast CPU)
      - No git submodule, making it easier to clone and build from source
      
      This is a big PR, but much of it is vendor code except for:
      
      - llama.go CGo bindings
      - example/: a simple example of running inference
      - runner/: a subprocess server designed to replace the llm/ext_server package
      - Makefile: an as-minimal-as-possible Makefile to build the runner package for
        different targets (cpu, avx, avx2, cuda, rocm)
      Co-authored-by: Jesse Gross <jesse@ollama.com>
      Co-authored-by: Daniel Hiltgen <daniel@ollama.com>
      
      * cache: Clear old KV cache entries when evicting a slot
      
      When forking a cache entry, if no empty slots are available we
      evict the least recently used one and copy over the KV entries
      from the closest match. However, this copy does not overwrite
      existing values but only adds new ones. Therefore, we need to
      clear the old slot first.
      
      This change fixes two issues:
       - The KV cache fills up and runs out of space even though we think
         we are managing it correctly
       - Performance gets worse over time as we use new cache entries that
         are not hot in the processor caches
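      A hypothetical sketch of the ordering fix (types and names here are
      illustrative, not the runner's actual cache code):

      ```go
      // Illustrative only: clear the evicted slot before copying the shared
      // prefix, since the copy adds entries but never overwrites existing ones.
      type slot struct {
          inputs []int // token IDs whose KV entries are resident in this slot
      }

      func reuseSlot(slots []slot, lru, bestMatch, overlap int) {
          slots[lru].inputs = slots[lru].inputs[:0] // the missing step: drop stale entries
          slots[lru].inputs = append(slots[lru].inputs, slots[bestMatch].inputs[:overlap]...)
      }
      ```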
      
      * doc: explain golang objc linker warning (#6830)
      
      * llama: gather transitive dependencies for rocm for dist packaging (#6848)
      
      * Refine go server makefiles to be more DRY (#6924)
      
      This breaks up the monolithic Makefile for the Go based runners into a
      set of utility files as well as recursive Makefiles for the runners.
      Files starting with the name "Makefile" are buildable, while files that
      end with ".make" are utilities to include in other Makefiles.  This
      reduces the amount of nearly identical targets and helps set a pattern
      for future community contributions for new GPU runner architectures.
      
      When we are ready to switch over to the Go runners, these files should
      move to the top of the repo, and we should add targets for the main CLI,
      as well as a helper "install" (put all the built binaries on the local
      system in a runnable state) and "dist" target (generate the various
      tar/zip files for distribution) for local developer use.
      
      * llama: don't create extraneous directories (#6988)
      
      * llama: Exercise the new build in CI (#6989)
      
      Wire up some basic sanity testing in CI for the Go runner.  GPU runners are not covered yet.
      
      * llama: Refine developer docs for Go server (#6842)
      
      This enhances the documentation for development, focusing on the new Go
      server. After we complete the transition, further doc refinements can
      remove the "transition" discussion.
      
      * runner.go: Allocate batches for all sequences during init
      
      We should tell the model that we could have full batches for all
      sequences. We already do this when we allocate the batches but it was
      missed during initialization.
      
      * llama.go: Don't return nil from Tokenize on zero length input
      
      Potentially receiving nil in a non-error condition is surprising to
      most callers - it's better to return an empty slice.
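      A sketch of the convention (the real Tokenize wraps the C tokenizer; the
      helper shown here is assumed):

      ```go
      // Sketch: never hand callers a nil slice in a non-error condition.
      func (m *Model) Tokenize(text string, addSpecial bool) ([]int, error) {
          tokens, err := m.tokenizeC(text, addSpecial) // assumed helper wrapping the C call
          if err != nil {
              return nil, err
          }
          if tokens == nil { // e.g. zero-length input produced nothing
              return []int{}, nil
          }
          return tokens, nil
      }
      ```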
      
      * runner.go: Remove stop tokens from cache
      
      If the last token is EOG then we don't return this and it isn't
      present in the cache (because it was never submitted to Decode).
      This works well for extending the cache entry with a new sequence.
      
      However, for multi-token stop sequences, we won't return any of the
      tokens, but all except the last one will be in the cache. This means
      that when the conversation continues, the cache will contain tokens that
      don't overlap with the new prompt.
      
      This works (we will pick up the portion where there is overlap) but
      it causes unnecessary cache thrashing because we will fork the original
      cache entry as it is not a perfect match.
      
      By trimming the cache to the tokens that we actually return this
      issue can be avoided.
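      An illustrative sketch of the trimming (names assumed):

      ```go
      // Illustrative: keep only the tokens that were actually returned so the
      // cached prefix lines up with the next prompt in the conversation.
      func trimCacheToReturned(cached []int, numReturned int) []int {
          if numReturned < len(cached) {
              return cached[:numReturned] // drop unreturned stop-sequence tokens
          }
          return cached
      }
      ```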
      
      * runner.go: Simplify flushing of pending tokens
      
      * runner.go: Update TODOs
      
      * runner.go: Don't panic when processing sequences
      
      If there is an error processing a sequence, we should return a
      clean HTTP error back to Ollama rather than panicking. This will
      make us more resilient to transient failures.
      
      Panics can still occur during startup as there is no way to serve
      requests if that fails.
      Co-authored-by: jmorganca <jmorganca@gmail.com>
      
      * runner.go: More accurately capture timings
      
      Currently, prompt processing time doesn't capture the time it takes
      to tokenize the input, only the decoding time. We should capture the
      full process to more accurately reflect reality. This is especially
      true once we start processing images where the initial processing
      can take significant time. This is also more consistent with the
      existing C++ runner.
      
      * runner.go: Support for vision models
      
      In addition to bringing feature parity with the C++ runner, this also
      incorporates several improvements:
       - Cache prompting works with images, avoiding the need to re-decode
         embeddings for every message in a conversation
       - Parallelism is supported, avoiding the need to restrict to one
         sequence at a time. (Though for now Ollama will not schedule
         them while we might need to fall back to the old runner.)
      Co-authored-by: jmorganca <jmorganca@gmail.com>
      
      * runner.go: Move Unicode checking code and add tests
      
      * runner.go: Export external cache members
      
      Runner and cache are in the same package so the change doesn't
      affect anything but it is more internally consistent.
      
      * runner.go: Image embedding cache
      
      Generating embeddings from images can take significant time (on
      my machine between 100ms and 8s depending on the model). Although
      we already cache the result of decoding these images, the embeddings
      need to be regenerated every time. This is not necessary if we get
      the same image over and over again, for example, during a conversation.
      
      This currently uses a very small cache with a very simple algorithm
      but it is easy to improve as is warranted.
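      An illustrative sketch of such a small cache, keyed by a hash of the
      image bytes (the key, eviction rule, and sizes are assumptions, not the
      runner's exact code):

      ```go
      import "crypto/sha256"

      // Illustrative embedding cache: tiny, FIFO eviction, keyed by image hash.
      type imageCache struct {
          entries map[[32]byte][][]float32
          order   [][32]byte // insertion order for simple eviction
          max     int
      }

      func newImageCache(max int) *imageCache {
          return &imageCache{entries: make(map[[32]byte][][]float32), max: max}
      }

      func (c *imageCache) get(img []byte) ([][]float32, bool) {
          emb, ok := c.entries[sha256.Sum256(img)]
          return emb, ok
      }

      func (c *imageCache) put(img []byte, emb [][]float32) {
          key := sha256.Sum256(img)
          if _, exists := c.entries[key]; !exists {
              if c.max > 0 && len(c.order) >= c.max {
                  delete(c.entries, c.order[0]) // evict the oldest entry
                  c.order = c.order[1:]
              }
              c.order = append(c.order, key)
          }
          c.entries[key] = emb
      }
      ```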
      
      * llama: catch up on patches
      
      Carry forward solar-pro and cli-unicode patches
      
      * runner.go: Don't re-allocate memory for every batch
      
      We can reuse memory allocated from batch to batch since batch
      size is fixed. This both saves the cost of reallocation and
      keeps the cache lines hot.
      
      This results in a roughly 1% performance improvement for token
      generation with Nvidia GPUs on Linux.
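      The generic form of the pattern (not the runner's exact code):

      ```go
      // Reuse one buffer sized for the fixed batch size instead of allocating
      // a fresh one per batch.
      func processAll(steps, batchSize int, fill func([]int32) []int32, submit func([]int32)) {
          buf := make([]int32, 0, batchSize)
          for i := 0; i < steps; i++ {
              buf = fill(buf[:0]) // truncate but keep capacity: no per-batch allocation
              submit(buf)
          }
      }
      ```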
      
      * runner.go: Default to classic input cache policy
      
      The input cache as part of the go runner implemented a cache
      policy that aims to maximize hit rate in both single and multi-
      user scenarios. When there is a cache hit, the response is
      very fast.
      
      However, performance is actually slower when there is an input
      cache miss due to worse GPU VRAM locality. This means that
      performance is generally better overall for multi-user scenarios
      (better input cache hit rate; locality was relatively poor already)
      but worse for single users (input cache hit rate is about the same;
      locality is now worse).
      
      This defaults the policy back to the old one to avoid a regression
      but keeps the new one available through an environment variable
      OLLAMA_MULTIUSER_CACHE. This is left undocumented as the goal is
      to improve this in the future to get the best of both worlds
      without user configuration.
      
      For inputs that result in cache misses, on Nvidia/Linux this
      change improves performance by 31% for prompt processing and
      13% for token generation.
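      A sketch of the opt-in check (the environment variable name comes from
      this change; how the value is interpreted, e.g. any non-empty string, is
      an assumption):

      ```go
      import "os"

      // The new hit-rate-maximizing policy stays available behind an
      // environment variable while the default reverts to the classic policy.
      func useMultiUserCache() bool {
          return os.Getenv("OLLAMA_MULTIUSER_CACHE") != ""
      }
      ```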
      
      * runner.go: Increase size of response channel
      
      Generally the CPU can easily keep up with handling responses that
      are generated but there's no reason not to let generation continue
      and handle things in larger batches if needed.
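      A sketch of the idea (the element type and buffer size are illustrative):

      ```go
      // A larger buffered channel lets generation keep producing while the HTTP
      // side drains responses in bigger batches instead of blocking on every send.
      type response struct {
          content string
          done    bool
      }

      func newResponseChannel() chan response {
          return make(chan response, 256) // buffer size illustrative
      }
      ```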
      
      * llama: Add CI to verify all vendored changes have patches (#7066)
      
      Make sure we don't accidentally merge changes in the vendored code
      that aren't also reflected in the patches.
      
      * llama: adjust clip patch for mingw utf-16 (#7065)
      
      * llama: adjust clip patch for mingw utf-16
      
      * llama: ensure static linking of runtime libs
      
      Avoid runtime dependencies on non-standard libraries
      
      * runner.go: Enable llamafile (all platforms) and BLAS (Mac OS)
      
      These are two features that are shown on llama.cpp's system info
      that are currently different between the two runners. On my test
      systems the performance difference is very small to negligible
      but it is probably still good to equalize the features.
      
      * llm: Don't add BOS/EOS for tokenize requests
      
      This is consistent with what server.cpp currently does. It affects
      things like token processing counts for embedding requests.
      
      * runner.go: Don't cache prompts for embeddings
      
      Our integration with server.cpp implicitly disables prompt caching
      because it is not part of the JSON object being parsed; this change
      makes the Go runner behave similarly.
      
      Prompt caching has been seen to affect the results of text completions
      on certain hardware. The results are not wrong either way but they
      are non-deterministic. However, embeddings seem to be affected even
      on hardware that does not show this behavior for completions. For
      now, it is best to maintain consistency with the existing behavior.
      
      * runner.go: Adjust debug log levels
      
      Add system info printed at startup and quiet down noisier logging.
      
      * llama: fix compiler flag differences (#7082)
      
      Adjust the flags for the new Go server to more closely match the
      generate flow
      
      * llama: refine developer docs (#7121)
      
      * llama: doc and example clean up (#7122)
      
      * llama: doc and example clean up
      
      * llama: Move new dockerfile into llama dir
      
      Temporary home until we fully transition to the Go server
      
      * llama: runner doc cleanup
      
      * llama.go: Add description for Tokenize error case
      
      ---------
      Co-authored-by: Jesse Gross <jesse@ollama.com>
      Co-authored-by: Daniel Hiltgen <daniel@ollama.com>
      Co-authored-by: Daniel Hiltgen <dhiltgen@users.noreply.github.com>
  21. 26 Sep, 2024 1 commit
    • server: close response body on error (#6986) · 03608cb4
      Blake Mizerany authored
      This change closes the response body when an error occurs in
      makeRequestWithRetry. Previously, the first non-200 response body was
      not closed before reattempting the request. This change ensures that
      the response body is closed in all cases where an error occurs,
      preventing leaks of file descriptors.
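      A sketch of the retry shape (credential refresh and retry limits are
      assumed); the key change is closing the non-2xx body before the next
      attempt:

      ```go
      // Sketch only: close every response body, including ones we retry past.
      for attempt := 0; attempt < maxRetries; attempt++ {
          resp, err := http.DefaultClient.Do(req)
          if err != nil {
              return nil, err
          }
          if resp.StatusCode == http.StatusUnauthorized {
              resp.Body.Close() // previously this body was leaked before retrying
              // ... refresh credentials, rebuild req ...
              continue
          }
          return resp, nil
      }
      ```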
      
      Fixes #6974
  22. 05 Sep, 2024 2 commits
  23. 23 Aug, 2024 1 commit
  24. 14 Aug, 2024 2 commits
  25. 08 Aug, 2024 1 commit
    • manifest: Store layers inside manifests consistently as values. · 7edaf6e7
      Jesse Gross authored
      Commit 1829fb61 ("manifest: Fix crash on startup when trying to clean up
      unused files (#5840)") changed the config layer stored in manifests
      from a pointer to a value. This was done in order to avoid potential
      nil pointer dereferences after it is deserialized from JSON in the
      event that the field is missing.
      
      This changes the Layers slice to also be stored by value. This enables
      consistency in handling across the two objects.
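      A sketch of the resulting shape (the field set is abridged and partly
      assumed):

      ```go
      // Both the config layer and the layer list are held by value, so a
      // manifest decoded from JSON with missing fields yields zero values
      // rather than nil pointers.
      type Layer struct {
          MediaType string `json:"mediaType"`
          Digest    string `json:"digest"`
          Size      int64  `json:"size"`
      }

      type Manifest struct {
          Config Layer   `json:"config"` // value since commit 1829fb61
          Layers []Layer `json:"layers"` // values as of this change, for consistency
      }
      ```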
  26. 07 Aug, 2024 3 commits
    • image: Clarify argument to WriteManifest is config · 97ec8cfd
      Jesse Gross authored
      When creating a model the config layer is appended to the list of
      layers and then the last layer is used as the config when writing the
      manifest. This change directly uses the config layer to write the
      manifest. There is no behavior change but it is less error prone.
    • manifest: Fix crash on startup when trying to clean up unused files (#5840) · 1829fb61
      Jesse Gross authored
      Currently if the config field is missing in the manifest file (or
      corrupted), Ollama will crash when it tries to read it. This can
      happen at startup or when pulling new models.
      
      This data is mostly just used for showing model information so we
      can be tolerant of it not being present - it is not required to
      run the models. Besides avoiding crashing, this also gives us the
      ability to restructure the config in the future by pulling it
      into the main manifest file.
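      A sketch of the tolerant read (names are assumed):

      ```go
      import (
          "encoding/json"
          "os"
      )

      // Sketch: a missing or corrupt config decodes to a zero value instead of
      // crashing, so code that only displays model information degrades gracefully.
      func readManifest(path string) (*Manifest, error) {
          f, err := os.Open(path)
          if err != nil {
              return nil, err
          }
          defer f.Close()

          var m Manifest
          if err := json.NewDecoder(f).Decode(&m); err != nil {
              return nil, err
          }
          // m.Config may be the zero value here; it is informational only and
          // is not required to run the model.
          return &m, nil
      }
      ```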
    • manifest: Don't prune layers if we can't open a manifest file · 685a5353
      Jesse Gross authored
      If there is an error when opening a manifest file (corrupted, permission denied, etc.)
      then the referenced layers will not be included in the list of active
      layers. This causes them to be deleted when pruning happens at startup
      or a model is pulled.
      
      In such a situation, we should prefer to preserve data in the hopes that
      it can be recovered rather than being aggressive about deletion.
  27. 02 Aug, 2024 1 commit
  28. 31 Jul, 2024 1 commit
  29. 26 Jul, 2024 1 commit
  30. 25 Jul, 2024 1 commit
  31. 22 Jul, 2024 1 commit
  32. 19 Jul, 2024 1 commit
  33. 16 Jul, 2024 1 commit
  34. 15 Jul, 2024 1 commit
  35. 05 Jul, 2024 1 commit
  36. 01 Jul, 2024 1 commit