"vscode:/vscode.git/clone" did not exist on "0eb507f2af991b1f0b6c2ede5b20a994999e85d3"
  1. 29 Oct, 2024 1 commit
  2. 18 Oct, 2024 1 commit
  3. 17 Oct, 2024 2 commits
    • IBM granite/granitemoe architecture support (#6760) · f2890a44
      Gabe Goodhart authored
      * fix(ext_server): Port llama.cpp sampling refactors to ext_server
      
      This was a fairly large changeset. I closely followed the changes here:
      https://github.com/ggerganov/llama.cpp/commit/df270ef74596da8f1178f08991f4c51f18c9ee82
      
      
      
      Branch: IBMGraniteArchitectureSupport
      Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
      
      * fix(server.cpp): Refactor server.cpp logging for llama.cpp overhaul
      
      Branch: IBMGraniteArchitectureSupport
      Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
      
      * feat: Bump llama.cpp to the latest master with `granite` support
      
      This does not yet have granite MoE support, but that can come in a
      follow up PR
      
      Branch: IBMGraniteArchitectureSupport
      Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
      
      * fix(patches): Update all patches (except solar-pro) to work with bumped llama.cpp
      
      Branch: IBMGraniteArchitectureSupport
      Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
      
      * fix(solar): Update solar patch for llama.cpp bump
      
      Branch: IBMGraniteArchitectureSupport
      Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
      
      * feat(llama.cpp): Bump llama.cpp for granitemoe support
      
      Branch: IBMGraniteArchitectureSupport
      Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
      
      * feat(llama.cpp): Bump llama.cpp for granitemoe support
      
      Branch: IBMGraniteArchitectureSupport
      Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
      
      * fix(solar): Update the solar-pro patch for latest llama.cpp bump
      
      Branch: IBMGraniteArchitectureSupport
      Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
      
      * feat(llama.cpp): Bump to the latest master of llama.cpp
      
      Branch: IBMGraniteArchitectureSupport
      Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
      
      * fix(patches): Update all patches for latest bump
      
      Branch: IBMGraniteArchitectureSupport
      Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
      
      * feat(llama): Always run sync.sh from the right directory
      
      Branch: IBMGraniteArchitectureSupport
      Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
      
      * fix(llama/patches): Update llama patches
      
      Branch: IBMGraniteArchitectureSupport
      Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
      
      * feat(llama)!: Rough sync with llama.cpp submodule
      
      There are a number of changes that will need to be propagated to llama.go
      before any of this works!
      
      Branch: IBMGraniteArchitectureSupport
      Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
      
      * fix(llama/patches): Add a patch and update for missing ggml-impl.h include
      
      This include is where the ggml_cgraph struct is defined. It is included in
      many of the .c files to define the forward declaration in ggml.h. It seems
      that with the subset of code included here, the import was somehow lost (or
      out-of-order) when building, so adding this include to llama.cpp fixes the
      missing definition.
      
      Branch: IBMGraniteArchitectureSupport
      Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
      
      * fix(llama/sync): Add missing ggml-cpu-impl.h copy-over in sync.sh
      
      Branch: IBMGraniteArchitectureSupport
      Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
      
      * fix(llama): Add missing log.cpp
      
      This was added as part of the logging overhaul done in llama.cpp
      
      Branch: IBMGraniteArchitectureSupport
      Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
      
      * fix(llama): Overhaul use of sampling module for llama.cpp changes
      
      The changes here reflect the changes made in the big llama.cpp sampling PR
      https://github.com/ggerganov/llama.cpp/pull/9294
      
      
      
      The sampling functionality is now broken into the base interface
      (llama_sampler) and the generation implementation (gpt_sampler). The
      changes here reflect that. Since the sampling.h/sampling.cpp code uses C++
      STL headers, the sampling_ext.[h|cpp] wrapper is maintained to allow Go to
      access a pure-C interface.
      
      Branch: IBMGraniteArchitectureSupport
      Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
      
      * fix(llama): Fix the impl of SampleTokenGreedy for new sampling
      
      I don't think this method is currently used, so it could probably just be
      removed so that all sampling goes through the GPT interface, but in the
      interest of doing no harm, this should keep the method working as expected.
      
      Branch: IBMGraniteArchitectureSupport
      
      * fix(llama): Remove unused SampleTokenGreedy
      
      Branch: IBMGraniteArchitectureSupport
      Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
      
      * fix(sync): Remove bash-specific change to sync.sh
      
      Branch: IBMGraniteArchitectureSupport
      Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
      
      * chore(gofumpt): Format on llama.go to pass linting
      
      Branch: IBMGraniteArchitectureSupport
      Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
      
      * fix(llm): Fix missing <thread> include in ext_server
      
      Branch: IBMGraniteArchitectureSupport
      Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
      
      * fix(llama): Remove TODO about grammar_first
      
      This feature was not used/needed previously, so it should be fine without
      plumbing it through now.
      
      Branch: IBMGraniteArchitectureSupport
      Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
      
      * fix(llama): Better naming for sampling wrapper and args
      
      Branch: IBMGraniteArchitectureSupport
      Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
      
      * fix(llama): Fix patch 05 to use new wrapper api and re-sync
      
      Branch: IBMGraniteArchitectureSupport
      Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
      
      * runner: Flush pending responses before returning
      
      If there are any pending responses (such as from potential stop
      tokens) then we should send them back before ending the sequence.
      Otherwise, we can be missing tokens at the end of a response.
      
      Fixes #6707
      
      * fix(llama/sampling): Use gpt_sampler with a forward declaration
      
      Branch: IBMGraniteArchitectureSupport
      Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
      
      * fix(llama): Remove unnecessary patch for gguf impl header
      
      This was caused by an earlier mistake in the embeddings patch that was
      dereferencing the pointer instead of using the wrapper API.
      
      Branch: IBMGraniteArchitectureSupport
      Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
      
      * fix(llm): Remove use of deprecated --log-disable flag
      
      Branch: IBMGraniteArchitectureSupport
      Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
      
      ---------
      Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
    • Rename gpu package discover (#7143) · 05cd82ef
      Daniel Hiltgen authored
      Cleaning up go package naming
  4. 15 Oct, 2024 2 commits
  5. 12 Oct, 2024 1 commit
  6. 10 Oct, 2024 1 commit
    • server: Don't clear cmd when closing a server · 03408f34
      Jesse Gross authored
      Close can be called on an LLM server if the runner subprocess dies.
      However, the Ollama scheduler code may not know about this yet and
      still try to access it. In this case, it is important that 'cmd'
      is still available as it is used to check on the status of the
      subprocess. If this happens, Kill may be called twice on the subprocess -
      that is fine.
      
      In addition, model unloading may race with new accesses, so we should
      hold a lock around this. This may result in the model being reloaded
      after the first close call - this is also fine as close will be called
      again later.
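
      A minimal Go sketch of this pattern, with illustrative names rather than
      Ollama's actual types: keep cmd available after Close so status checks
      still work, and hold a lock so unloading can't race with new accesses.

      ```go
      package llm

      import (
          "os/exec"
          "sync"
      )

      // llmServer is an illustrative stand-in for the real server type.
      type llmServer struct {
          mu  sync.Mutex
          cmd *exec.Cmd // left in place after Close so status checks keep working
      }

      func (s *llmServer) Close() error {
          s.mu.Lock()
          defer s.mu.Unlock()
          if s.cmd != nil && s.cmd.Process != nil {
              // Kill may run twice if the subprocess already died; that is fine.
              _ = s.cmd.Process.Kill()
          }
          return nil
      }
      ```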
  7. 08 Oct, 2024 2 commits
    • Fix build leakages (#7141) · f9584deb
      Daniel Hiltgen authored
      The recent change to applying patches leaves the submodule dirty based on
      "new commits" being present.  This ensures we clean up so the tree no longer
      reports dirty after a `go generate ./...` run.
      
      The Makefile was being a bit too aggressive in cleaning things up and would result in deleting the placeholder files which someone might accidentally commit.
    • Re-introduce the `llama` package (#5034) · 96efd905
      Jeffrey Morgan authored
      
      
      * Re-introduce the llama package
      
      This PR brings back the llama package, making it possible to call llama.cpp and
      ggml APIs from Go directly via CGo. This has a few advantages:
      
      - C APIs can be called directly from Go without needing to use the previous
        "server" REST API
      - On macOS and for CPU builds on Linux and Windows, Ollama can be built without
        a `go generate ./...` step, making it easy to get up and running to hack on
        parts of Ollama that don't require fast inference
      - Faster build times for AVX, AVX2, CUDA, and ROCm (a full build of all runners
        takes <5 min on a fast CPU)
      - No git submodule, making it easier to clone and build from source
      
      This is a big PR, but much of it is vendor code except for:
      
      - llama.go CGo bindings
      - example/: a simple example of running inference
      - runner/: a subprocess server designed to replace the llm/ext_server package
      - Makefile: an as-minimal-as-possible Makefile to build the runner package for
        different targets (cpu, avx, avx2, cuda, rocm)
      Co-authored-by: Jesse Gross <jesse@ollama.com>
      Co-authored-by: Daniel Hiltgen <daniel@ollama.com>
      
      * cache: Clear old KV cache entries when evicting a slot
      
      When forking a cache entry, if no empty slots are available we
      evict the least recently used one and copy over the KV entries
      from the closest match. However, this copy does not overwrite
      existing values but only adds new ones. Therefore, we need to
      clear the old slot first.
      
      This change fixes two issues:
       - The KV cache fills up and runs out of space even though we think
         we are managing it correctly
       - Performance gets worse over time as we use new cache entries that
         are not hot in the processor caches
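
      A rough Go sketch of the idea (the types and names are illustrative, not
      the runner's actual cache structures): clear the evicted slot before
      copying over the shared prefix from the closest match.

      ```go
      package cache

      // slot is an illustrative stand-in for a KV cache slot's token list.
      type slot struct {
          tokens []int
      }

      // reuseSlot repurposes an evicted slot for a new sequence. The old contents
      // are cleared first; otherwise stale entries would sit alongside the tokens
      // copied from the closest matching slot.
      func reuseSlot(dst, src *slot, sharedPrefix int) {
          dst.tokens = dst.tokens[:0] // clear the old slot before reuse
          dst.tokens = append(dst.tokens, src.tokens[:sharedPrefix]...)
      }
      ```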
      
      * doc: explain golang objc linker warning (#6830)
      
      * llama: gather transitive dependencies for rocm for dist packaging (#6848)
      
      * Refine go server makefiles to be more DRY (#6924)
      
      This breaks up the monolithic Makefile for the Go based runners into a
      set of utility files as well as recursive Makefiles for the runners.
      Files starting with the name "Makefile" are buildable, while files that
      end with ".make" are utilities to include in other Makefiles.  This
      reduces the amount of nearly identical targets and helps set a pattern
      for future community contributions for new GPU runner architectures.
      
      When we are ready to switch over to the Go runners, these files should
      move to the top of the repo, and we should add targets for the main CLI,
      as well as a helper "install" (put all the built binaries on the local
      system in a runnable state) and "dist" target (generate the various
      tar/zip files for distribution) for local developer use.
      
      * llama: don't create extraneous directories (#6988)
      
      * llama: Exercise the new build in CI (#6989)
      
      Wire up some basic sanity testing in CI for the Go runner.  GPU runners are not covered yet.
      
      * llama: Refine developer docs for Go server (#6842)
      
      This enhances the documentation for development focusing on the new Go
      server.  After we complete the transition further doc refinements
      can remove the "transition" discussion.
      
      * runner.go: Allocate batches for all sequences during init
      
      We should tell the model that we could have full batches for all
      sequences. We already do this when we allocate the batches but it was
      missed during initialization.
      
      * llama.go: Don't return nil from Tokenize on zero length input
      
      Potentially receiving nil in a non-error condition is surprising to
      most callers - it's better to return an empty slice.
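
      A small illustration of the convention, using a stand-in tokenizer rather
      than the real llama.go binding: zero-length input yields an empty, non-nil
      slice, so callers never need a special nil check.

      ```go
      package main

      import "fmt"

      // tokenize is a placeholder for the real Tokenize binding; the byte-splitting
      // body is purely illustrative. The important part is the empty-input case.
      func tokenize(text string) ([]int, error) {
          if len(text) == 0 {
              return []int{}, nil // empty slice, not nil
          }
          tokens := make([]int, 0, len(text))
          for _, b := range []byte(text) {
              tokens = append(tokens, int(b))
          }
          return tokens, nil
      }

      func main() {
          toks, err := tokenize("")
          fmt.Println(toks == nil, len(toks), err) // false 0 <nil>
      }
      ```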
      
      * runner.go: Remove stop tokens from cache
      
      If the last token is EOG then we don't return this and it isn't
      present in the cache (because it was never submitted to Decode).
      This works well for extending the cache entry with a new sequence.
      
      However, for multi-token stop sequences, we won't return any of the stop
      tokens, but all except the last one will be in the cache. This means that
      when the conversation continues, the cache will contain tokens that don't
      overlap with the new prompt.
      
      This works (we will pick up the portion where there is overlap) but
      it causes unnecessary cache thrashing because we will fork the original
      cache entry as it is not a perfect match.
      
      By trimming the cache to the tokens that we actually return this
      issue can be avoided.
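
      A sketch of the trimming described above (the names are hypothetical, not
      the runner's real fields): keep only the tokens that were actually returned
      so the cache ends where the visible response ends.

      ```go
      package cache

      // trimToReturned drops cached tokens that were never returned to the caller,
      // e.g. the leading tokens of a multi-token stop sequence.
      func trimToReturned(cached []int, returned int) []int {
          if returned < len(cached) {
              return cached[:returned]
          }
          return cached
      }
      ```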
      
      * runner.go: Simplify flushing of pending tokens
      
      * runner.go: Update TODOs
      
      * runner.go: Don't panic when processing sequences
      
      If there is an error processing a sequence, we should return a
      clean HTTP error back to Ollama rather than panicking. This will
      make us more resilient to transient failures.
      
      Panics can still occur during startup as there is no way to serve
      requests if that fails.
      Co-authored-by: jmorganca <jmorganca@gmail.com>
      
      * runner.go: More accurately capture timings
      
      Currently prompt processing time doesn't capture the time it takes
      to tokenize the input, only decoding time. We should capture the
      full process to more accurately reflect reality. This is especially
      true once we start processing images where the initial processing
      can take significant time. This is also more consistent with the
      existing C++ runner.
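
      A hedged sketch of the timing change with invented helper names: start the
      prompt-processing clock before tokenization, not only around decoding.

      ```go
      package main

      import (
          "fmt"
          "time"
      )

      // tokenize and decode are stand-ins for the real runner calls.
      func tokenize(prompt string) []int { return []int{1, 2, 3} }
      func decode(tokens []int)          { time.Sleep(10 * time.Millisecond) }

      func main() {
          prompt := "why is the sky blue?"

          start := time.Now() // start before tokenizing, not after
          tokens := tokenize(prompt)
          decode(tokens)
          fmt.Println("prompt processing took", time.Since(start))
      }
      ```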
      
      * runner.go: Support for vision models
      
      In addition to bringing feature parity with the C++ runner, this also
      incorporates several improvements:
       - Cache prompting works with images, avoiding the need to re-decode
         embeddings for every message in a conversation
       - Parallelism is supported, avoiding the need to restrict to one
         sequence at a time. (Though for now Ollama will not schedule
         them while we might need to fall back to the old runner.)
      Co-authored-by: jmorganca <jmorganca@gmail.com>
      
      * runner.go: Move Unicode checking code and add tests
      
      * runner.go: Export external cache members
      
      Runner and cache are in the same package so the change doesn't
      affect anything but it is more internally consistent.
      
      * runner.go: Image embedding cache
      
      Generating embeddings from images can take significant time (on
      my machine between 100ms and 8s depending on the model). Although
      we already cache the result of decoding these images, the embeddings
      need to be regenerated every time. This is not necessary if we get
      the same image over and over again, for example, during a conversation.
      
      This currently uses a very small cache with a very simple algorithm
      but it is easy to improve as is warranted.
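
      A minimal sketch of such a cache, assuming the image bytes are hashed into
      a key; the real cache's size and eviction choice may differ.

      ```go
      package cache

      // embeddingCache is a tiny fixed-size cache keyed by a hash of the image.
      // Eviction here is deliberately simple (drop an arbitrary entry); anything
      // smarter can be added later if warranted.
      type embeddingCache struct {
          entries map[uint64][]float32
          max     int
      }

      func newEmbeddingCache(max int) *embeddingCache {
          return &embeddingCache{entries: make(map[uint64][]float32), max: max}
      }

      func (c *embeddingCache) get(key uint64) ([]float32, bool) {
          e, ok := c.entries[key]
          return e, ok
      }

      func (c *embeddingCache) put(key uint64, embedding []float32) {
          if len(c.entries) >= c.max {
              for k := range c.entries {
                  delete(c.entries, k)
                  break
              }
          }
          c.entries[key] = embedding
      }
      ```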
      
      * llama: catch up on patches
      
      Carry forward solar-pro and cli-unicode patches
      
      * runner.go: Don't re-allocate memory for every batch
      
      We can reuse memory allocated from batch to batch since batch
      size is fixed. This both saves the cost of reallocation and
      keeps the cache lines hot.
      
      This results in a roughly 1% performance improvement for token
      generation with Nvidia GPUs on Linux.
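
      An illustrative sketch (not the actual batch code): allocate the batch
      buffers once at the fixed batch size, then reslice them to zero length
      each iteration instead of reallocating.

      ```go
      package runner

      // batch holds per-iteration buffers sized to the fixed batch size.
      type batch struct {
          tokens []int
          seqIDs []int
      }

      func newBatch(size int) *batch {
          return &batch{
              tokens: make([]int, 0, size),
              seqIDs: make([]int, 0, size),
          }
      }

      // reset keeps the allocated capacity but drops the contents, so the same
      // memory (and hot cache lines) is reused on the next batch.
      func (b *batch) reset() {
          b.tokens = b.tokens[:0]
          b.seqIDs = b.seqIDs[:0]
      }
      ```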
      
      * runner.go: Default to classic input cache policy
      
      The input cache as part of the go runner implemented a cache
      policy that aims to maximize hit rate in both single and
      multi-user scenarios. When there is a cache hit, the response is
      very fast.
      
      However, performance is actually slower when there is an input
      cache miss due to worse GPU VRAM locality. This means that
      performance is generally better overall for multi-user scenarios
      (better input cache hit rate, locality was relatively poor already).
      But worse for single users (input cache hit rate is about the same,
      locality is now worse).
      
      This defaults the policy back to the old one to avoid a regression
      but keeps the new one available through an environment variable
      OLLAMA_MULTIUSER_CACHE. This is left undocumented as the goal is
      to improve this in the future to get the best of both worlds
      without user configuration.
      
      For inputs that result in cache misses, on Nvidia/Linux this
      change improves performance by 31% for prompt processing and
      13% for token generation.
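
      A minimal sketch of the opt-in mentioned above; OLLAMA_MULTIUSER_CACHE is
      the variable named in this commit, while the parsing and surrounding code
      are illustrative.

      ```go
      package main

      import (
          "fmt"
          "os"
      )

      func main() {
          // Any non-empty value opts in to the multi-user cache policy; the
          // default remains the classic policy described above.
          multiUserCache := os.Getenv("OLLAMA_MULTIUSER_CACHE") != ""
          fmt.Println("multi-user cache policy enabled:", multiUserCache)
      }
      ```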
      
      * runner.go: Increase size of response channel
      
      Generally the CPU can easily keep up with handling responses that
      are generated, but there's no reason not to let generation continue
      and handle things in larger batches if needed.
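
      The idea in a few lines of Go (the buffer size here is made up): a larger
      buffer on the response channel lets generation run ahead while responses
      are drained in batches.

      ```go
      package main

      import "fmt"

      func main() {
          // Buffered channel: generation can queue up responses without blocking
          // on the consumer; 100 is an illustrative capacity, not the real value.
          responses := make(chan string, 100)
          responses <- "token"
          fmt.Println(len(responses), cap(responses)) // 1 100
      }
      ```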
      
      * llama: Add CI to verify all vendored changes have patches (#7066)
      
      Make sure we don't accidentally merge changes in the vendored code
      that aren't also reflected in the patches.
      
      * llama: adjust clip patch for mingw utf-16 (#7065)
      
      * llama: adjust clip patch for mingw utf-16
      
      * llama: ensure static linking of runtime libs
      
      Avoid runtime dependencies on non-standard libraries
      
      * runner.go: Enable llamafile (all platforms) and BLAS (Mac OS)
      
      These are two features that are shown on llama.cpp's system info
      that are currently different between the two runners. On my test
      systems the performance difference is very small to negligible
      but it is probably still good to equalize the features.
      
      * llm: Don't add BOS/EOS for tokenize requests
      
      This is consistent with what server.cpp currently does. It affects
      things like token processing counts for embedding requests.
      
      * runner.go: Don't cache prompts for embeddings
      
      Our integration with server.cpp implicitly disables prompt caching
      because it is not part of the JSON object being parsed; this makes
      the Go runner behave similarly.
      
      Prompt caching has been seen to affect the results of text completions
      on certain hardware. The results are not wrong either way but they
      are non-deterministic. However, embeddings seem to be affected even
      on hardware that does not show this behavior for completions. For
      now, it is best to maintain consistency with the existing behavior.
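
      Sketched in Go with hypothetical field names (the real request and option
      types differ): embedding requests simply opt out of prompt caching so
      results stay deterministic.

      ```go
      package runner

      // request and options are illustrative stand-ins for the runner's types.
      type request struct {
          embedding bool
      }

      type options struct {
          cachePrompt bool
      }

      // applyCachePolicy disables prompt caching for embedding requests, matching
      // the implicit behavior of the previous server.cpp integration.
      func applyCachePolicy(req request, opts *options) {
          if req.embedding {
              opts.cachePrompt = false
          }
      }
      ```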
      
      * runner.go: Adjust debug log levels
      
      Add system info printed at startup and quiet down noisier logging.
      
      * llama: fix compiler flag differences (#7082)
      
      Adjust the flags for the new Go server to more closely match the
      generate flow
      
      * llama: refine developer docs (#7121)
      
      * llama: doc and example clean up (#7122)
      
      * llama: doc and example clean up
      
      * llama: Move new dockerfile into llama dir
      
      Temporary home until we fully transition to the Go server
      
      * llama: runner doc cleanup
      
      * llama.go: Add description for Tokenize error case
      
      ---------
      Co-authored-by: Jesse Gross <jesse@ollama.com>
      Co-authored-by: Daniel Hiltgen <daniel@ollama.com>
      Co-authored-by: Daniel Hiltgen <dhiltgen@users.noreply.github.com>
  8. 21 Sep, 2024 1 commit
  9. 20 Sep, 2024 1 commit
    • Add Windows arm64 support to official builds (#5712) · d632e23f
      Daniel Hiltgen authored
      * Unified arm/x86 windows installer
      
      This adjusts the installer payloads to be architecture aware so we can carry
      both amd64 and arm64 binaries in the installer, and install only the applicable
      architecture at install time.
      
      * Include arm64 in official windows build
      
      * Harden schedule test for slow windows timers
      
      This test seems to be a bit flaky on windows, so give it more time to converge
  10. 18 Sep, 2024 1 commit
  11. 17 Sep, 2024 1 commit
    • make patches git am-able · 7bd7b027
      Michael Yang authored
      Raw diffs can be applied using `git apply` but not with `git am`. Git
      patches, e.g. those produced by `git format-patch`, are both apply-able and am-able.
  12. 13 Sep, 2024 1 commit
  13. 12 Sep, 2024 2 commits
    • Use GOARCH for build dirs (#6779) · fda0d3be
      Daniel Hiltgen authored
      Corrects x86_64 vs amd64 discrepancy
    • Optimize container images for startup (#6547) · cd5c8f64
      Daniel Hiltgen authored
      * Optimize container images for startup
      
      This change adjusts how runner payloads are handled to support
      container builds where we keep them extracted in the filesystem.
      This makes it easier to optimize the cpu/cuda vs cpu/rocm images for
      size, and should result in faster startup times for container images.
      
      * Refactor payload logic and add buildx support for faster builds
      
      * Move payloads around
      
      * Review comments
      
      * Converge to buildx based helper scripts
      
      * Use docker buildx action for release
  14. 11 Sep, 2024 1 commit
    • runner: Flush pending responses before returning · 93ac3760
      Jesse Gross authored
      If there are any pending responses (such as from potential stop
      tokens) then we should send them back before ending the sequence.
      Otherwise, we can be missing tokens at the end of a response.
      
      Fixes #6707
  15. 10 Sep, 2024 1 commit
  16. 06 Sep, 2024 1 commit
  17. 05 Sep, 2024 2 commits
  18. 04 Sep, 2024 2 commits
  19. 03 Sep, 2024 2 commits
    • Log system memory at info (#6617) · 037a4d10
      Daniel Hiltgen authored
      On systems with low system memory, we can hit allocation failures that are difficult to diagnose
      without debug logs. Logging system memory at the info level makes these failures easier to spot.
    • Fix sprintf to snprintf (#5664) · 94fff580
      FellowTraveler authored
      /Users/au/src/ollama/llm/ext_server/server.cpp:289:9: warning: 'sprintf' is deprecated: This function is provided for compatibility reasons only. Due to security concerns inherent in the design of sprintf(3), it is highly recommended that you use snprintf(3) instead.
  20. 29 Aug, 2024 1 commit
  21. 27 Aug, 2024 1 commit
  22. 25 Aug, 2024 1 commit
  23. 23 Aug, 2024 2 commits
  24. 22 Aug, 2024 1 commit
    • Fix embeddings memory corruption (#6467) · 90ca8417
      Daniel Hiltgen authored
      * Fix embeddings memory corruption
      
      The patch was leading to a buffer overrun corruption. Once removed, though, parallelism
      in server.cpp led to hitting an assert due to slot/seq IDs being >= token count. To
      work around this, only use slot 0 for embeddings.
      
      * Fix embed integration test assumption
      
      The token eval count has changed with recent llama.cpp bumps (0.3.5+)
  25. 21 Aug, 2024 1 commit
  26. 20 Aug, 2024 1 commit
    • Split rocm back out of bundle (#6432) · a017cf2f
      Daniel Hiltgen authored
      We're over budget for github's maximum release artifact size with rocm + 2 cuda
      versions.  This splits rocm back out as a discrete artifact, but keeps the layout so it can
      be extracted into the same location as the main bundle.
  27. 19 Aug, 2024 6 commits