1. 11 Dec, 2024 1 commit
  2. 10 Dec, 2024 3 commits
    • frob · 757eeacc
    • Stefan Weil
    • build: Make target improvements (#7499) · 4879a234
      Daniel Hiltgen authored
      * llama: wire up builtin runner
      
      This adds a new entrypoint into the ollama CLI to run the cgo-built runner.
      On Mac arm64, this will have GPU support, but on all other platforms it will
      be the lowest-common-denominator CPU build.  After we fully transition
      to the new Go runners, more tech debt can be removed and we can stop building
      the "default" runner via make and rely on the builtin always.
      
      * build: Make target improvements
      
      Add a few new targets and help for building locally.
      This also adjusts the runner lookup to favor local builds, then
      runners relative to the executable, and finally payloads.
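      A minimal sketch of that lookup order (the paths and helper name are illustrative,
      not the actual implementation):

      package runners

      import (
          "errors"
          "os"
          "path/filepath"
      )

      // locateRunner checks candidate locations in the order described above:
      // a local build tree, then runners shipped next to the executable, then
      // extracted payloads.
      func locateRunner(name string) (string, error) {
          exe, err := os.Executable()
          if err != nil {
              return "", err
          }
          candidates := []string{
              filepath.Join("build", "runners", name),                // local build
              filepath.Join(filepath.Dir(exe), "runners", name),      // relative to the executable
              filepath.Join(os.TempDir(), "ollama", "runners", name), // extracted payloads (placeholder path)
          }
          for _, c := range candidates {
              if _, err := os.Stat(c); err == nil {
                  return c, nil
              }
          }
          return "", errors.New("no runner found for " + name)
      }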
      
      * Support customized CPU flags for runners
      
      This implements a simplified custom CPU flags pattern for the runners.
      When built without overrides, the runner name contains the vector flag
      we check for (AVX) to ensure we don't try to run on unsupported systems
      and crash.  If the user builds a customized set, we omit the naming
      scheme and don't check for compatibility.  This avoids checking
      requirements at runtime, so that logic has been removed as well.  This
      can be used to build GPU runners with no vector flags, or CPU/GPU
      runners with additional flags (e.g. AVX512) enabled.
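      A sketch of the naming-based compatibility check described above (the name
      parsing and the golang.org/x/sys/cpu usage are illustrative assumptions):

      package runners

      import (
          "strings"

          "golang.org/x/sys/cpu"
      )

      // compatible reports whether the host CPU satisfies the vector level
      // encoded in the runner name. Customized builds omit the suffix and
      // skip the check entirely.
      func compatible(runnerName string) bool {
          if !strings.Contains(runnerName, "avx") {
              return true // custom build: no requirement encoded, nothing to verify
          }
          if strings.Contains(runnerName, "avx2") {
              return cpu.X86.HasAVX2
          }
          return cpu.X86.HasAVX
      }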
      
      * Use relative paths
      
      If the user checks out the repo in a path that contains spaces, make gets
      really confused, so use relative paths for everything in-repo to avoid breakage.
      
      * Remove payloads from main binary
      
      * install: clean up prior libraries
      
      This removes support for v0.3.6 and older versions (before the tar bundle)
      and ensures we clean up prior libraries before extracting the bundle(s).
      Without this change, runners and dependent libraries could leak when we
      update and lead to subtle runtime errors.
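      A rough sketch of the remove-before-extract step (the install path is a
      placeholder, not the installer's real location):

      package main

      import (
          "log"
          "os"
          "path/filepath"
      )

      func main() {
          // Placeholder path, for illustration only.
          libDir := filepath.Join("/usr/local/lib", "ollama")

          // Wipe libraries left behind by prior versions so stale runners
          // cannot leak into the new install.
          if err := os.RemoveAll(libDir); err != nil {
              log.Fatal(err)
          }
          if err := os.MkdirAll(libDir, 0o755); err != nil {
              log.Fatal(err)
          }

          // ... extract the new runner bundle(s) into libDir here ...
          log.Println("clean library directory ready at", libDir)
      }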
  3. 09 Dec, 2024 1 commit
    • prompt: Don't trim whitespace from prompts · 900f64e6
      Jesse Gross authored
      New lines can be an important part of a user's prompt and trimming
      them can alter the results. We previously only trimmed prompts with
      images, but refactoring brought this behavior to all prompts, where
      it became more noticeable.
      
      The /generate endpoint adds less whitespace and therefore doesn't
      need to trim it out - this brings the same behavior to /chat.
      
      Thanks to @gabe-l-hart for spotting the issue!
      
      Fixes #7795
  4. 05 Dec, 2024 2 commits
  5. 30 Nov, 2024 2 commits
  6. 27 Nov, 2024 1 commit
  7. 25 Nov, 2024 1 commit
    • server: fix Transport override (#7834) · 2b7ed61c
      Blake Mizerany authored
      This changes makeRequest to update the http client Transport if and only
      if testMakeRequestDialContext is set. This avoids overriding the
      default Transport when testMakeRequestDialContext is nil, which broke
      existing behavior, including proxies, timeouts, and other settings.
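      A sketch of the guarded override (the surrounding client setup is
      illustrative; only the testMakeRequestDialContext hook comes from the commit):

      package server

      import (
          "context"
          "net"
          "net/http"
      )

      // Set only by tests; nil in production.
      var testMakeRequestDialContext func(ctx context.Context, network, addr string) (net.Conn, error)

      func newRequestClient() *http.Client {
          c := &http.Client{}
          // Leave the default Transport (with its proxy, timeout, and other
          // behaviors) untouched unless a test installs a custom dialer.
          if testMakeRequestDialContext != nil {
              c.Transport = &http.Transport{DialContext: testMakeRequestDialContext}
          }
          return c
      }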
      
      Fixes #7829
      Fixes #7788
  8. 23 Nov, 2024 1 commit
  9. 22 Nov, 2024 1 commit
    • server: remove out of date anonymous access check (#7785) · 7b5585b9
      Bruce MacDonald authored
      In the past the ollama.com server would return a JWT that contained
      information about the user being authenticated. This was used to return
      different error messages to the user. This is no longer possible since the
      token used to authenticate no longer contains information about the user,
      so this change removes the code, which no longer works.

      Follow-up changes will improve the error messages returned here, but it is
      good to clean up first.
  10. 20 Nov, 2024 1 commit
  11. 19 Nov, 2024 1 commit
    • server: allow mixed-case model names on push, pull, cp, and create (#7676) · 4b8a2e34
      Blake Mizerany authored
      This change allows mixed-case model names to be pushed, pulled,
      copied, and created. This was previously disallowed because the Ollama
      registry was backed by a Docker registry that enforced a naming
      convention forbidding mixed-case names; that is no longer the case.

      This does not break existing, intended behaviors.

      Also, make TestCase test a story of creating, updating, pulling, and
      copying a model with case variations, ensuring the model's manifest is
      updated correctly and not duplicated across files that differ only in
      case.
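      An illustrative sketch (not the registry's actual code) of one way to accept
      mixed-case names without duplicating manifests:

      package model

      import "strings"

      // Name keeps the case the user typed for display, while storage is
      // keyed on a folded form so "MyModel" and "mymodel" share one manifest.
      type Name struct {
          Display string
      }

      func (n Name) manifestKey() string {
          return strings.ToLower(n.Display)
      }

      func sameModel(a, b Name) bool {
          return strings.EqualFold(a.Display, b.Display)
      }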
  12. 17 Nov, 2024 1 commit
  13. 06 Nov, 2024 1 commit
    • sched: Lift parallel restriction for multimodal models except mllama · 6cd56687
      Jesse Gross authored
      The Go runner does not have a problem with supporting parallel
      requests for most multimodal models. Now that we won't be potentially
      falling back to server.cpp, this restriction can be lifted.
      
      However, the new mllama model can't support parallel requests, so we
      will need to keep a restriction for that.
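      A minimal sketch of that scheduling rule (the function shape is assumed):

      package server

      // numParallelFor keeps parallel requests for multimodal models in
      // general, but pins mllama to a single sequence.
      func numParallelFor(arch string, requested int) int {
          if arch == "mllama" {
              return 1
          }
          return requested
      }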
  14. 05 Nov, 2024 2 commits
    • One corrupt manifest should not wedge model operations (#7515) · a4c70fe1
      Daniel Hiltgen authored
      One potential failure mode is an empty file, which bubbles up as an EOF error,
      leading to all pulls and listing operations failing.  Instead, continue and
      warn about the corrupt manifest.  This also allows re-pulling the corrupt
      manifest to repair the system.
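      A sketch of the warn-and-continue behavior (types and helper names are
      hypothetical):

      package server

      import (
          "encoding/json"
          "log/slog"
          "os"
      )

      type Manifest struct {
          SchemaVersion int `json:"schemaVersion"`
      }

      // loadManifests skips unreadable or corrupt manifests with a warning
      // instead of failing the whole listing or pull operation.
      func loadManifests(paths []string) []Manifest {
          var manifests []Manifest
          for _, p := range paths {
              data, err := os.ReadFile(p)
              if err != nil {
                  slog.Warn("skipping unreadable manifest", "path", p, "error", err)
                  continue
              }
              var m Manifest
              if err := json.Unmarshal(data, &m); err != nil {
                  slog.Warn("skipping corrupt manifest", "path", p, "error", err)
                  continue
              }
              manifests = append(manifests, m)
          }
          return manifests
      }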
    • prompt: Use a single token when estimating mllama context size · 34a75102
      Jesse Gross authored
      Currently we assume that images take 768 tokens of context size for
      the purposes of clipping old messages that exceed the context window.
      However, our mllama implementation stores the full image embedding
      in a single token. As a result, there is significant waste of context
      space.
      
      Ideally, we would handle this more generically and have the
      implementation report the number of tokens. However, at the moment
      this would just result in a similar set of 'if' conditions in the
      runner plus APIs to report it back. So for now, we just keep this
      simple.
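      A sketch of the resulting estimate (the numbers come from the text above; the
      function shape is assumed):

      package server

      // imageTokens estimates how much context window a single image consumes
      // when clipping old messages: mllama stores the whole image embedding in
      // one token, while other multimodal models are budgeted at 768 tokens.
      func imageTokens(arch string) int {
          if arch == "mllama" {
              return 1
          }
          return 768
      }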
  15. 04 Nov, 2024 1 commit
  16. 30 Oct, 2024 1 commit
    • runner.go: Better abstract vision model integration · c826e574
      Jesse Gross authored

      - Update mllama to take the cross attention state as embeddings in
        a batch, more similar to how Llava handles it. This improves
        integration with the input cache.
      - Pass locations in a prompt for embeddings using tags similar to Llava.
      - Abstract interface to vision models so the main runner accesses Clip
        and Mllama similarly.
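      A hedged sketch of that abstraction (the interface and method names are
      illustrative, not the runner's actual API):

      package runner

      // VisionModel is the common surface the main runner uses so Clip and
      // Mllama are accessed the same way.
      type VisionModel interface {
          // EmbedImage turns raw image bytes into embedding tokens that can be
          // placed into a batch alongside text tokens.
          EmbedImage(data []byte) ([][]float32, error)
      }

      type clipModel struct{}
      type mllamaModel struct{}

      func (clipModel) EmbedImage(data []byte) ([][]float32, error)   { return nil, nil }
      func (mllamaModel) EmbedImage(data []byte) ([][]float32, error) { return nil, nil }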
      Co-authored-by: Michael Yang <mxyng@pm.me>
  17. 29 Oct, 2024 1 commit
  18. 28 Oct, 2024 1 commit
  19. 18 Oct, 2024 1 commit
  20. 17 Oct, 2024 1 commit
  21. 08 Oct, 2024 1 commit
    • Re-introduce the `llama` package (#5034) · 96efd905
      Jeffrey Morgan authored

      * Re-introduce the llama package
      
      This PR brings back the llama package, making it possible to call llama.cpp and
      ggml APIs from Go directly via CGo. This has a few advantages:
      
      - C APIs can be called directly from Go without needing to use the previous
        "server" REST API
      - On macOS and for CPU builds on Linux and Windows, Ollama can be built without
        a go generate ./... step, making it easy to get up and running to hack on
        parts of Ollama that don't require fast inference
      - Faster build times for AVX, AVX2, CUDA, and ROCm (a full build of all runners
        takes <5 min on a fast CPU)
      - No git submodule, making it easier to clone and build from source
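      Not Ollama's actual bindings, just the CGo pattern the package relies on:
      C declarations in the preamble comment become callable Go symbols, with no
      separate "server" REST hop in between.

      package main

      /*
      #include <stdlib.h>

      static int add(int a, int b) { return a + b; }
      */
      import "C"

      import "fmt"

      func main() {
          // Calls the C function directly from Go via CGo.
          fmt.Println(int(C.add(2, 3)))
      }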
      
      This is a big PR, but much of it is vendor code except for:
      
      - llama.go CGo bindings
      - example/: a simple example of running inference
      - runner/: a subprocess server designed to replace the llm/ext_server package
      - Makefile: an as-minimal-as-possible Makefile to build the runner package for
        different targets (cpu, avx, avx2, cuda, rocm)
      Co-authored-by: Jesse Gross <jesse@ollama.com>
      Co-authored-by: Daniel Hiltgen <daniel@ollama.com>
      
      * cache: Clear old KV cache entries when evicting a slot
      
      When forking a cache entry, if no empty slots are available we
      evict the least recently used one and copy over the KV entries
      from the closest match. However, this copy does not overwrite
      existing values but only adds new ones. Therefore, we need to
      clear the old slot first.
      
      This change fixes two issues:
       - The KV cache fills up and runs out of space even though we think
         we are managing it correctly
       - Performance gets worse over time as we use new cache entries that
         are not hot in the processor caches
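      A simplified sketch of the clear-before-copy rule (the slot type here is a
      stand-in, not the runner's cache structures):

      package runner

      type slot struct {
          tokens []int
      }

      // reuseSlot evicts dst and seeds it from the closest matching entry.
      // Clearing first matters: appending onto the old contents would leave
      // stale tokens behind and quietly exhaust the KV cache.
      func reuseSlot(dst, src *slot) {
          dst.tokens = dst.tokens[:0]                    // clear the old slot first
          dst.tokens = append(dst.tokens, src.tokens...) // then copy the closest match
      }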
      
      * doc: explain golang objc linker warning (#6830)
      
      * llama: gather transitive dependencies for rocm for dist packaging (#6848)
      
      * Refine go server makefiles to be more DRY (#6924)
      
      This breaks up the monolithic Makefile for the Go based runners into a
      set of utility files as well as recursive Makefiles for the runners.
      Files starting with the name "Makefile" are buildable, while files that
      end with ".make" are utilities to include in other Makefiles.  This
      reduces the amount of nearly identical targets and helps set a pattern
      for future community contributions for new GPU runner architectures.
      
      When we are ready to switch over to the Go runners, these files should
      move to the top of the repo, and we should add targets for the main CLI,
      as well as a helper "install" (put all the built binaries on the local
      system in a runnable state) and "dist" target (generate the various
      tar/zip files for distribution) for local developer use.
      
      * llama: don't create extraneous directories (#6988)
      
      * llama: Exercise the new build in CI (#6989)
      
      Wire up some basic sanity testing in CI for the Go runner.  GPU runners are not covered yet.
      
      * llama: Refine developer docs for Go server (#6842)
      
      This enhances the documentation for development focusing on the new Go
      server.  After we complete the transition, further doc refinements
      can remove the "transition" discussion.
      
      * runner.go: Allocate batches for all sequences during init
      
      We should tell the model that we could have full batches for all
      sequences. We already do this when we allocate the batches but it was
      missed during initialization.
      
      * llama.go: Don't return nil from Tokenize on zero length input
      
      Potentially receiving nil in a non-error condition is surprising to
      most callers - it's better to return an empty slice.
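      A tiny sketch of that convention (the signature is assumed):

      package llama

      func Tokenize(text string) ([]int, error) {
          if len(text) == 0 {
              // Empty slice, not nil: nil in a non-error case surprises callers.
              return []int{}, nil
          }
          // ... real tokenization of non-empty input goes here ...
          return []int{0}, nil // placeholder
      }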
      
      * runner.go: Remove stop tokens from cache
      
      If the last token is EOG then we don't return this and it isn't
      present in the cache (because it was never submitted to Decode).
      This works well for extending the cache entry with a new sequence.
      
      However, for multi-token stop sequences, we won't return any of the
      tokens, but all except the last one will be in the cache. This means
      when the conversation continues the cache will contain tokens that
      don't overlap with the new prompt.
      
      This works (we will pick up the portion where there is overlap) but
      it causes unnecessary cache thrashing because we will fork the original
      cache entry as it is not a perfect match.
      
      By trimming the cache to the tokens that we actually return this
      issue can be avoided.
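      A rough sketch of the trim (types simplified; the real cache tracks more than
      token IDs):

      package runner

      // trimToReturned drops cached tokens beyond what was actually returned,
      // so a later prompt extending the conversation overlaps the cache exactly
      // instead of forking it.
      func trimToReturned(cached []int, returned int) []int {
          if returned < len(cached) {
              return cached[:returned]
          }
          return cached
      }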
      
      * runner.go: Simplify flushing of pending tokens
      
      * runner.go: Update TODOs
      
      * runner.go: Don't panic when processing sequences
      
      If there is an error processing a sequence, we should return a
      clean HTTP error back to Ollama rather than panicking. This will
      make us more resilient to transient failures.
      
      Panics can still occur during startup as there is no way to serve
      requests if that fails.
      Co-authored-by: jmorganca <jmorganca@gmail.com>
      
      * runner.go: More accurately capture timings
      
      Currently prompt processing time doesn't capture the time it takes
      to tokenize the input, only decoding time. We should capture the
      full process to more accurately reflect reality. This is especially
      true once we start processing images where the initial processing
      can take significant time. This is also more consistent with the
      existing C++ runner.
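      A sketch of the measurement window (function and helper names are assumed):

      package runner

      import "time"

      func processPrompt(prompt string) time.Duration {
          start := time.Now() // start before tokenize, not after

          _ = tokenize(prompt) // tokenization is now inside the measured window
          decode()             // decoding, as before

          return time.Since(start) // reported as prompt processing time
      }

      func tokenize(s string) []int { return nil }
      func decode()                 {}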
      
      * runner.go: Support for vision models
      
      In addition to bringing feature parity with the C++ runner, this also
      incorporates several improvements:
       - Cache prompting works with images, avoiding the need to re-decode
         embeddings for every message in a conversation
       - Parallelism is supported, avoiding the need to restrict to one
         sequence at a time. (Though for now Ollama will not schedule
         them while we might need to fall back to the old runner.)
      Co-authored-by: jmorganca <jmorganca@gmail.com>
      
      * runner.go: Move Unicode checking code and add tests
      
      * runner.go: Export external cache members
      
      Runner and cache are in the same package so the change doesn't
      affect anything but it is more internally consistent.
      
      * runner.go: Image embedding cache
      
      Generating embeddings from images can take significant time (on
      my machine between 100ms and 8s depending on the model). Although
      we already cache the result of decoding these images, the embeddings
      need to be regenerated every time. This is not necessary if we get
      the same image over and over again, for example, during a conversation.
      
      This currently uses a very small cache with a very simple algorithm
      but it is easy to improve as is warranted.
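      Illustrative only: a very small cache keyed by image hash, in the spirit of
      the simple algorithm described above.

      package runner

      import "crypto/sha256"

      const maxImageEmbeds = 4 // deliberately tiny

      type embedCache struct {
          entries map[[32]byte][][]float32
      }

      func (c *embedCache) get(image []byte) ([][]float32, bool) {
          e, ok := c.entries[sha256.Sum256(image)]
          return e, ok
      }

      func (c *embedCache) put(image []byte, embedding [][]float32) {
          if c.entries == nil || len(c.entries) >= maxImageEmbeds {
              // Simplest possible eviction: start over when full.
              c.entries = make(map[[32]byte][][]float32)
          }
          c.entries[sha256.Sum256(image)] = embedding
      }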
      
      * llama: catch up on patches
      
      Carry forward solar-pro and cli-unicode patches
      
      * runner.go: Don't re-allocate memory for every batch
      
      We can reuse memory allocated from batch to batch since batch
      size is fixed. This both saves the cost of reallocation and
      keeps the cache lines hot.
      
      This results in a roughly 1% performance improvement for token
      generation with Nvidia GPUs on Linux.
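      A sketch of the reuse pattern (the batch type is simplified):

      package runner

      type batch struct {
          tokens []int
          seqIDs []int
      }

      // newBatch allocates the buffers once at the fixed batch size.
      func newBatch(size int) *batch {
          return &batch{
              tokens: make([]int, 0, size),
              seqIDs: make([]int, 0, size),
          }
      }

      // reset keeps the backing arrays (and their hot cache lines) and only
      // clears the logical length before the next batch is filled.
      func (b *batch) reset() {
          b.tokens = b.tokens[:0]
          b.seqIDs = b.seqIDs[:0]
      }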
      
      * runner.go: Default to classic input cache policy
      
      The input cache that is part of the Go runner implemented a cache
      policy that aims to maximize hit rate in both single- and multi-
      user scenarios. When there is a cache hit, the response is
      very fast.

      However, performance is actually slower when there is an input
      cache miss due to worse GPU VRAM locality. This means that
      performance is generally better overall for multi-user scenarios
      (better input cache hit rate; locality was relatively poor already)
      but worse for single users (input cache hit rate is about the same;
      locality is now worse).
      
      This defaults the policy back to the old one to avoid a regression
      but keeps the new one available through an environment variable
      OLLAMA_MULTIUSER_CACHE. This is left undocumented as the goal is
      to improve this in the future to get the best of both worlds
      without user configuration.
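      A minimal sketch of the switch (only the OLLAMA_MULTIUSER_CACHE name comes
      from the text; the rest is illustrative):

      package runner

      import "os"

      // multiUserCache reports whether the multi-user-oriented input cache
      // policy is opted into; the classic policy remains the default.
      func multiUserCache() bool {
          return os.Getenv("OLLAMA_MULTIUSER_CACHE") != ""
      }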
      
      For inputs that result in cache misses, on Nvidia/Linux this
      change improves performance by 31% for prompt processing and
      13% for token generation.
      
      * runner.go: Increase size of response channel
      
      Generally the CPU can easily keep up with handling responses that
      are generated, but there's no reason not to let generation continue
      and handle things in larger batches if needed.
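      Sketch only (the buffer size and types here are arbitrary): a buffered
      channel lets generation run ahead of response delivery.

      package runner

      type response struct {
          content string
      }

      func newResponseChannel() chan response {
          // Buffered so token generation isn't blocked on each delivery.
          return make(chan response, 100)
      }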
      
      * llama: Add CI to verify all vendored changes have patches (#7066)
      
      Make sure we don't accidentally merge changes in the vendored code
      that aren't also reflected in the patches.
      
      * llama: adjust clip patch for mingw utf-16 (#7065)
      
      * llama: adjust clip patch for mingw utf-16
      
      * llama: ensure static linking of runtime libs
      
      Avoid runtime dependencies on non-standard libraries
      
      * runner.go: Enable llamafile (all platforms) and BLAS (Mac OS)
      
      These are two features that are shown on llama.cpp's system info
      that are currently different between the two runners. On my test
      systems the performance difference is very small to negligible
      but it is probably still good to equalize the features.
      
      * llm: Don't add BOS/EOS for tokenize requests
      
      This is consistent with what server.cpp currently does. It affects
      things like token processing counts for embedding requests.
      
      * runner.go: Don't cache prompts for embeddings
      
      Our integration with server.cpp implicitly disables prompt caching
      because it is not part of the JSON object being parsed; this change
      makes the Go runner behave similarly.
      
      Prompt caching has been seen to affect the results of text completions
      on certain hardware. The results are not wrong either way but they
      are non-deterministic. However, embeddings seem to be affected even
      on hardware that does not show this behavior for completions. For
      now, it is best to maintain consistency with the existing behavior.
      
      * runner.go: Adjust debug log levels
      
      Add system info printed at startup and quiet down noisier logging.
      
      * llama: fix compiler flag differences (#7082)
      
      Adjust the flags for the new Go server to more closely match the
      generate flow
      
      * llama: refine developer docs (#7121)
      
      * llama: doc and example clean up (#7122)
      
      * llama: doc and example clean up
      
      * llama: Move new dockerfile into llama dir
      
      Temporary home until we fully transition to the Go server
      
      * llama: runner doc cleanup
      
      * llama.go: Add description for Tokenize error case
      
      ---------
      Co-authored-by: Jesse Gross <jesse@ollama.com>
      Co-authored-by: Daniel Hiltgen <daniel@ollama.com>
      Co-authored-by: Daniel Hiltgen <dhiltgen@users.noreply.github.com>
  22. 01 Oct, 2024 1 commit
  23. 26 Sep, 2024 1 commit
    • server: close response body on error (#6986) · 03608cb4
      Blake Mizerany authored
      This change closes the response body when an error occurs in
      makeRequestWithRetry. Previously, the first non-200 response body was
      not closed before reattempting the request. This change ensures that
      the response body is closed in all cases where an error occurs,
      preventing leaks of file descriptors.
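      A sketch of the retry loop with the fix (the function shape is assumed; only
      the makeRequestWithRetry name comes from the commit):

      package server

      import (
          "fmt"
          "net/http"
      )

      func makeRequestWithRetry(do func() (*http.Response, error), attempts int) (*http.Response, error) {
          var lastErr error
          for i := 0; i < attempts; i++ {
              resp, err := do()
              if err != nil {
                  lastErr = err
                  continue
              }
              if resp.StatusCode == http.StatusOK {
                  return resp, nil
              }
              // Previously this body leaked a file descriptor on each retry.
              resp.Body.Close()
              lastErr = fmt.Errorf("unexpected status %d", resp.StatusCode)
          }
          return nil, lastErr
      }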
      
      Fixes #6974
  24. 20 Sep, 2024 1 commit
    • Add Windows arm64 support to official builds (#5712) · d632e23f
      Daniel Hiltgen authored
      * Unified arm/x86 windows installer
      
      This adjusts the installer payloads to be architecture-aware so we can carry
      both amd64 and arm64 binaries in the installer, and install only the applicable
      architecture at install time.
      
      * Include arm64 in official windows build
      
      * Harden schedule test for slow windows timers
      
      This test seems to be a bit flaky on Windows, so give it more time to converge.
  25. 18 Sep, 2024 1 commit
  26. 12 Sep, 2024 1 commit
    • Optimize container images for startup (#6547) · cd5c8f64
      Daniel Hiltgen authored
      * Optimize container images for startup
      
      This change adjusts how runner payloads are handled to support
      container builds where we keep them extracted in the filesystem.
      This makes it easier to optimize the cpu/cuda vs cpu/rocm images for
      size, and should result in faster startup times for container images.
      
      * Refactor payload logic and add buildx support for faster builds
      
      * Move payloads around
      
      * Review comments
      
      * Converge to buildx based helper scripts
      
      * Use docker buildx action for release
  27. 11 Sep, 2024 1 commit
  28. 05 Sep, 2024 3 commits
  29. 28 Aug, 2024 2 commits
  30. 27 Aug, 2024 2 commits
  31. 23 Aug, 2024 1 commit