1. 30 Oct, 2025 1 commit
  2. 29 Oct, 2025 1 commit
  3. 28 Oct, 2025 1 commit
  4. 17 Oct, 2025 1 commit
    • test: harden scheduler tests (#12662) · 68e04c7f
      Daniel Hiltgen authored
      * test: harden scheduler tests
      
      This removes reschedDelay, which was stale code, and adds a
      configurable timeout to waitForVRAMRecovery so tests can set the
      timeout very short and avoid the scheduler getting stuck until the
      test itself times out (a sketch follows below).
      
      * test: tune tests for partial loads
      
      Give stress tests more time when the model is split between CPU/GPU
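
      A minimal sketch of the configurable VRAM-recovery wait described in the
      first item above; the Scheduler fields and helper here are hypothetical
      stand-ins, not the project's real types:

        package sched

        import (
            "log/slog"
            "time"
        )

        // Scheduler is a hypothetical stand-in; only the timeout field matters
        // for this sketch. Tests can set vramRecoveryTimeout to a few
        // milliseconds so a stuck wait cannot run into the test's own deadline.
        type Scheduler struct {
            vramRecoveryTimeout time.Duration // generous default in production
            freeVRAM            func() uint64 // reports currently free VRAM in bytes
        }

        // waitForVRAMRecovery polls until enough VRAM is free or the
        // configurable timeout elapses, instead of blocking indefinitely.
        func (s *Scheduler) waitForVRAMRecovery(needed uint64) bool {
            deadline := time.Now().Add(s.vramRecoveryTimeout)
            for time.Now().Before(deadline) {
                if s.freeVRAM() >= needed {
                    return true
                }
                time.Sleep(10 * time.Millisecond)
            }
            slog.Debug("gave up waiting for VRAM to recover", "needed", needed)
            return false
        }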
      68e04c7f
  5. 08 Oct, 2025 1 commit
  6. 01 Oct, 2025 1 commit
    • Use runners for GPU discovery (#12090) · bc8909fb
      Daniel Hiltgen authored
      This revamps how we discover GPUs in the system by leveraging the Ollama
      runner.  This should eliminate inconsistency between our GPU discovery and the
      runner's capabilities at runtime, particularly for cases where we try to filter
      out unsupported GPUs.  Now the runner does that implicitly based on the actual
      device list.  In some cases free VRAM reporting can be unreliable, which can
      lead to scheduling mistakes, so this also includes a patch to leverage more
      reliable VRAM reporting libraries where available.
      
      Automatic workarounds have been removed, as only one GPU relied on them; the
      workaround for that GPU is now documented. This GPU will soon fall off the
      support matrix with the next ROCm bump.
      
      Additional cleanup of the scheduler and discovery packages can be done in the
      future once we have switched on the new memory management code, and removed
      support for the llama runner.
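
      A rough sketch of the device-list-driven discovery described above; the
      Runner interface, Device fields, and the zero-VRAM filter are assumptions
      for illustration, not the real runner protocol:

        package discover

        import "context"

        // Device is a hypothetical summary of what a runner reports for one GPU.
        type Device struct {
            ID        string
            Library   string // e.g. "cuda", "rocm", "metal"
            FreeVRAM  uint64 // bytes; ideally from a more reliable vendor library
            TotalVRAM uint64
        }

        // Runner stands in for the runner subprocess: it reports only the
        // devices it can actually drive.
        type Runner interface {
            ListDevices(ctx context.Context) ([]Device, error)
        }

        // discoverGPUs asks the runner itself which devices it can drive, so the
        // scheduler no longer needs its own support-matrix filtering: anything
        // the runner does not report is implicitly excluded.
        func discoverGPUs(ctx context.Context, r Runner) ([]Device, error) {
            devs, err := r.ListDevices(ctx)
            if err != nil {
                return nil, err
            }
            // Drop devices that report no usable memory; in this sketch a zero
            // free-VRAM figure is treated as "do not schedule here".
            usable := devs[:0]
            for _, d := range devs {
                if d.FreeVRAM > 0 {
                    usable = append(usable, d)
                }
            }
            return usable, nil
        }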
      bc8909fb
  7. 22 Sep, 2025 1 commit
  8. 12 Sep, 2025 1 commit
  9. 09 Sep, 2025 1 commit
    • tests: reduce stress on CPU to 2 models (#12161) · 67451828
      Daniel Hiltgen authored
      * tests: reduce stress on CPU to 2 models
      
      This should avoid flakes due to systems getting overloaded with 3 (or more) models running concurrently
      
      * tests: allow slow systems to pass on timeout
      
      If a slow system is still streaming a response, and the response
      will pass validation, don't fail just because the system is slow.
      
      * test: unload embedding models more quickly
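
      A sketch of the "pass if the slow system was going to validate anyway"
      policy from the second item above; the streamFn type and helper below are
      illustrative, not the integration suite's real helpers:

        package integration

        import (
            "context"
            "errors"
            "strings"
            "testing"
            "time"
        )

        // streamFn stands in for whatever streams tokens from the server,
        // appending each chunk to sb until the stream ends or ctx expires.
        type streamFn func(ctx context.Context, prompt string, sb *strings.Builder) error

        // doGenerate passes if the response contains an expected answer, even
        // when a slow system hit the deadline while still streaming.
        func doGenerate(t *testing.T, stream streamFn, prompt string, anyOf []string, timeout time.Duration) {
            ctx, cancel := context.WithTimeout(context.Background(), timeout)
            defer cancel()

            var sb strings.Builder
            err := stream(ctx, prompt, &sb)
            response := strings.ToLower(sb.String())

            for _, want := range anyOf {
                if strings.Contains(response, strings.ToLower(want)) {
                    return // validates, even if we timed out mid-stream
                }
            }
            if errors.Is(err, context.DeadlineExceeded) {
                t.Fatalf("timed out before a valid response: %q", response)
            } else if err != nil {
                t.Fatalf("generate failed: %v", err)
            }
            t.Fatalf("response %q did not contain any of %v", response, anyOf)
        }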
      67451828
  10. 29 Aug, 2025 1 commit
    • perf: build graph for next batch async to keep GPU busy (#11863) · 517807cd
      Daniel Hiltgen authored
      * perf: build graph for next batch in parallel to keep GPU busy
      
      This refactors the main run loop of the ollama runner to perform the
      GPU-intensive tasks (Compute+Floats) in a goroutine so we can prepare the next
      batch in parallel, reducing the time the GPU stalls waiting for the next
      batch of work.
      
      * tests: tune integration tests for ollama engine
      
      This tunes the integration tests to focus more on models supported
      by the new engine.
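
      A minimal sketch of the overlap described above: while the GPU-heavy
      compute for batch N runs in a goroutine, the loop prepares batch N+1. The
      batch type and the prepare/compute callbacks are placeholders, not the
      runner's real API:

        package runner

        type batch struct{ inputs []int32 }

        // runLoop overlaps GPU compute with CPU-side batch preparation so the
        // GPU is not left idle between batches.
        func runLoop(prepare func() (*batch, bool), compute func(*batch)) {
            done := make(chan struct{}, 1)
            done <- struct{}{} // no compute in flight yet

            pending, ok := prepare()
            for ok {
                <-done // wait for the previous batch's compute to finish
                b := pending
                go func() {
                    compute(b) // GPU-intensive work runs concurrently...
                    done <- struct{}{}
                }()
                pending, ok = prepare() // ...while the next batch is built on the CPU
            }
            <-done // drain the final in-flight compute
        }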
      517807cd
  11. 15 Aug, 2025 1 commit
    • test: improve scheduler/concurrency stress tests (#11906) · d6f7233a
      Daniel Hiltgen authored
      * test: improve scheduler/concurrency stress tests
      
      The scheduler test used to rely on approximate memory figures and would often
      over- or undershoot a system's capacity, leading to flaky test results.
      This should improve the reliability of this scenario by leveraging
      ps output to determine exactly how many models it takes to
      trigger thrashing.
      
      The concurrency test is also refined to target num_parallel + 1 and handle
      timeouts better.
      
      With these refinements, TestMultiModelConcurrency was redundant
      
      * test: add parallel generate with history
      
      TestGenerateWithHistory will help verify caching and context
      are properly handled while making requests
      
      * test: focus embed tests on embedding models
      
      remove non-embedding models from the embedding tests
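
      A generic sketch of the ps-driven sizing from the first item above; the
      psEntry type and the load/listRunning callbacks stand in for the data
      behind `ollama ps` and are assumptions, not the project's real API:

        package integration

        // psEntry mirrors the two figures the test needs: how big a model is
        // and how much of it is actually resident in VRAM.
        type psEntry struct {
            Name     string
            Size     uint64 // total bytes the model needs
            SizeVRAM uint64 // bytes currently resident in VRAM
        }

        // modelsUntilThrash loads candidate models one at a time and stops as
        // soon as the ps-style listing shows a model that no longer fits
        // entirely in VRAM, i.e. the point where further loads would thrash.
        func modelsUntilThrash(load func(name string) error, listRunning func() []psEntry, candidates []string) (int, error) {
            for i, name := range candidates {
                if err := load(name); err != nil {
                    return i, err
                }
                for _, e := range listRunning() {
                    if e.SizeVRAM < e.Size { // partially offloaded: VRAM exhausted
                        return i, nil
                    }
                }
            }
            return len(candidates), nil
        }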
      d6f7233a
  12. 14 Aug, 2025 1 commit
  13. 07 Aug, 2025 1 commit
  14. 11 Jul, 2025 1 commit
  15. 05 Jul, 2025 1 commit
    • int: add performance integration tests (#11173) · 4f473e22
      Daniel Hiltgen authored
      usage example:
        go test --tags=integration,perf -count 1 ./integration -v -timeout 1h -run TestModelsPerf 2>&1 | tee int.log
        cat int.log | grep MODEL_PERF_HEADER | cut -f2- -d: > perf.csv
        cat int.log | grep MODEL_PERF_DATA | cut -f2- -d: >> perf.csv
      4f473e22
  16. 06 May, 2025 1 commit
    • Move quantization to new backend (#10363) · 42481045
      Daniel Hiltgen authored
      * Move quantization logic to GGML via new backend
      
      This moves the model-aware logic to Go code and calls GGML's quantization code for model creation.
      
      * Remove "add model quantizations"
      
      This is no longer needed now that quantization is implemented in Go+GGML code directly.
      42481045
  17. 04 May, 2025 1 commit
  18. 16 Apr, 2025 1 commit
  19. 02 Apr, 2025 1 commit
  20. 08 Oct, 2024 1 commit
    • Re-introduce the `llama` package (#5034) · 96efd905
      Jeffrey Morgan authored
      
      
      * Re-introduce the llama package
      
      This PR brings back the llama package, making it possible to call llama.cpp and
      ggml APIs from Go directly via CGo. This has a few advantages:
      
      - C APIs can be called directly from Go without needing to use the previous
        "server" REST API
      - On macOS and for CPU builds on Linux and Windows, Ollama can be built without
        a go generate ./... step, making it easy to get up and running to hack on
        parts of Ollama that don't require fast inference
      - Faster build times for AVX, AVX2, CUDA, and ROCm (a full build of all runners
        takes <5 min on a fast CPU)
      - No git submodule, making it easier to clone and build from source
      
      This is a big PR, but much of it is vendor code except for:
      
      - llama.go CGo bindings
      - example/: a simple example of running inference
      - runner/: a subprocess server designed to replace the llm/ext_server package
      - Makefile: a Makefile, kept as minimal as possible, to build the runner package
        for different targets (cpu, avx, avx2, cuda, rocm)
      Co-authored-by: Jesse Gross <jesse@ollama.com>
      Co-authored-by: Daniel Hiltgen <daniel@ollama.com>
      
      * cache: Clear old KV cache entries when evicting a slot
      
      When forking a cache entry, if no empty slots are available we
      evict the least recently used one and copy over the KV entries
      from the closest match. However, this copy does not overwrite
      existing values but only adds new ones. Therefore, we need to
      clear the old slot first.
      
      This change fixes two issues:
       - The KV cache fills up and runs out of space even though we think
         we are managing it correctly
       - Performance gets worse over time as we use new cache entries that
         are not hot in the processor caches
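
      A simplified sketch of the fix described above; the slot structure is
      illustrative, not the runner's real cache:

        package cache

        type slot struct {
            lastUsed int64
            tokens   []int32 // tokens whose KV entries this slot holds
        }

        // evictAndFork reuses the least recently used slot for a forked
        // sequence (it assumes at least one slot exists). The old contents are
        // cleared *before* copying, because the copy from the closest match
        // only adds entries and never overwrites stale ones.
        func evictAndFork(slots []*slot, src *slot, now int64) *slot {
            victim := slots[0]
            for _, s := range slots[1:] {
                if s.lastUsed < victim.lastUsed {
                    victim = s
                }
            }

            victim.tokens = victim.tokens[:0] // clear the stale slot first
            victim.tokens = append(victim.tokens, src.tokens...)
            victim.lastUsed = now
            return victim
        }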
      
      * doc: explain golang objc linker warning (#6830)
      
      * llama: gather transitive dependencies for rocm for dist packaging (#6848)
      
      * Refine go server makefiles to be more DRY (#6924)
      
      This breaks up the monolithic Makefile for the Go based runners into a
      set of utility files as well as recursive Makefiles for the runners.
      Files starting with the name "Makefile" are buildable, while files that
      end with ".make" are utilities to include in other Makefiles.  This
       reduces the number of nearly identical targets and helps set a pattern
      for future community contributions for new GPU runner architectures.
      
      When we are ready to switch over to the Go runners, these files should
      move to the top of the repo, and we should add targets for the main CLI,
      as well as a helper "install" (put all the built binaries on the local
      system in a runnable state) and "dist" target (generate the various
      tar/zip files for distribution) for local developer use.
      
      * llama: don't create extraneous directories (#6988)
      
      * llama: Exercise the new build in CI (#6989)
      
      Wire up some basic sanity testing in CI for the Go runner.  GPU runners are not covered yet.
      
      * llama: Refine developer docs for Go server (#6842)
      
      This enhances the documentation for development focusing on the new Go
      server.  After we complete the transition further doc refinements
      can remove the "transition" discussion.
      
      * runner.go: Allocate batches for all sequences during init
      
      We should tell the model that we could have full batches for all
      sequences. We already do this when we allocate the batches but it was
      missed during initialization.
      
      * llama.go: Don't return nil from Tokenize on zero length input
      
      Potentially receiving nil in a non-error condition is surprising to
      most callers - it's better to return an empty slice.
      
      * runner.go: Remove stop tokens from cache
      
      If the last token is EOG then we don't return this and it isn't
      present in the cache (because it was never submitted to Decode).
      This works well for extending the cache entry with a new sequence.
      
       However, for multi-token stop sequences, we won't return any of the stop
       tokens, yet all but the last one will be in the cache. This means that
       when the conversation continues, the cache will contain tokens that
       don't overlap with the new prompt.
      
      This works (we will pick up the portion where there is overlap) but
      it causes unnecessary cache thrashing because we will fork the original
      cache entry as it is not a perfect match.
      
      By trimming the cache to the tokens that we actually return this
      issue can be avoided.
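
      A sketch of that trimming; the function and its arguments are
      illustrative, not the runner's real cache code:

        package cache

        // trimStopFromCache drops the trailing stop-sequence tokens that were
        // decoded but never returned, so the cached entry lines up exactly
        // with what the caller saw and can be extended rather than forked.
        func trimStopFromCache(cached, pendingStop []int32) []int32 {
            if n := len(pendingStop); n > 0 && n <= len(cached) {
                return cached[:len(cached)-n]
            }
            return cached
        }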
      
      * runner.go: Simplify flushing of pending tokens
      
      * runner.go: Update TODOs
      
      * runner.go: Don't panic when processing sequences
      
       If there is an error processing a sequence, we should return a
       clean HTTP error back to Ollama rather than panicking. This will
       make us more resilient to transient failures.
      
      Panics can still occur during startup as there is no way to serve
      requests if that fails.
       Co-authored-by: jmorganca <jmorganca@gmail.com>
      
      * runner.go: More accurately capture timings
      
       Currently prompt processing time doesn't capture the time it takes
       to tokenize the input, only decoding time. We should capture the
      full process to more accurately reflect reality. This is especially
      true once we start processing images where the initial processing
      can take significant time. This is also more consistent with the
      existing C++ runner.
      
      * runner.go: Support for vision models
      
      In addition to bringing feature parity with the C++ runner, this also
      incorporates several improvements:
       - Cache prompting works with images, avoiding the need to re-decode
         embeddings for every message in a conversation
       - Parallelism is supported, avoiding the need to restrict to one
         sequence at a time. (Though for now Ollama will not schedule
         them while we might need to fall back to the old runner.)
       Co-authored-by: jmorganca <jmorganca@gmail.com>
      
      * runner.go: Move Unicode checking code and add tests
      
      * runner.go: Export external cache members
      
      Runner and cache are in the same package so the change doesn't
      affect anything but it is more internally consistent.
      
      * runner.go: Image embedding cache
      
      Generating embeddings from images can take significant time (on
      my machine between 100ms and 8s depending on the model). Although
      we already cache the result of decoding these images, the embeddings
      need to be regenerated every time. This is not necessary if we get
      the same image over and over again, for example, during a conversation.
      
       This currently uses a very small cache with a very simple algorithm,
       but it is easy to improve if warranted.
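
      A tiny sketch of such a cache, keyed by a hash of the image bytes with
      round-robin replacement; sizes and names are illustrative:

        package runner

        import "crypto/sha256"

        const imageCacheSize = 4

        type imageCacheEntry struct {
            key       [sha256.Size]byte
            embedding []float32
        }

        // imageCache remembers the last few image embeddings so repeating the
        // same image in a conversation skips the costly re-embedding step.
        type imageCache struct {
            entries [imageCacheSize]imageCacheEntry
            next    int // next slot to overwrite, round-robin
        }

        func (c *imageCache) get(image []byte) ([]float32, bool) {
            key := sha256.Sum256(image)
            for _, e := range c.entries {
                if e.embedding != nil && e.key == key {
                    return e.embedding, true
                }
            }
            return nil, false
        }

        func (c *imageCache) put(image []byte, embedding []float32) {
            c.entries[c.next] = imageCacheEntry{key: sha256.Sum256(image), embedding: embedding}
            c.next = (c.next + 1) % imageCacheSize
        }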
      
      * llama: catch up on patches
      
      Carry forward solar-pro and cli-unicode patches
      
      * runner.go: Don't re-allocate memory for every batch
      
       We can reuse memory allocated from batch to batch since the batch
       size is fixed. This both saves the cost of reallocation and
       keeps the cache lines hot.
      
      This results in a roughly 1% performance improvement for token
      generation with Nvidia GPUs on Linux.
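
      A minimal illustration of that reuse pattern; the batch type is a
      placeholder, not the runner's real structure:

        package runner

        type tokenBatch struct {
            tokens []int32
        }

        // newTokenBatch allocates once at the configured batch size; reset
        // truncates without freeing, so the same backing memory (and the cache
        // lines that come with it) is reused for every batch.
        func newTokenBatch(size int) *tokenBatch {
            return &tokenBatch{tokens: make([]int32, 0, size)}
        }

        func (b *tokenBatch) reset() {
            b.tokens = b.tokens[:0]
        }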
      
      * runner.go: Default to classic input cache policy
      
      The input cache as part of the go runner implemented a cache
      policy that aims to maximize hit rate in both single and multi-
      user scenarios. When there is a cache hit, the response is
      very fast.
      
      However, performance is actually slower when there is an input
      cache miss due to worse GPU VRAM locality. This means that
       performance is generally better overall for multi-user scenarios
       (better input cache hit rate; locality was relatively poor already)
       but worse for single users (input cache hit rate is about the same;
       locality is now worse).
      
      This defaults the policy back to the old one to avoid a regression
      but keeps the new one available through an environment variable
      OLLAMA_MULTIUSER_CACHE. This is left undocumented as the goal is
      to improve this in the future to get the best of both worlds
      without user configuration.
      
      For inputs that result in cache misses, on Nvidia/Linux this
      change improves performance by 31% for prompt processing and
      13% for token generation.
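
      A sketch of how the opt-in could be read; the env var name comes from the
      message above, while the helper itself is illustrative:

        package runner

        import (
            "os"
            "strconv"
        )

        // multiUserCacheEnabled keeps the classic policy as the default and
        // switches to the hit-rate-oriented policy only when
        // OLLAMA_MULTIUSER_CACHE is set to a truthy value.
        func multiUserCacheEnabled() bool {
            v := os.Getenv("OLLAMA_MULTIUSER_CACHE")
            if v == "" {
                return false // default: classic input cache policy
            }
            enabled, err := strconv.ParseBool(v)
            return err == nil && enabled
        }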
      
      * runner.go: Increase size of response channel
      
       Generally the CPU can easily keep up with handling responses as
       they are generated, but there's no reason not to let generation continue
       and handle things in larger batches if needed.
      
      * llama: Add CI to verify all vendored changes have patches (#7066)
      
      Make sure we don't accidentally merge changes in the vendored code
      that aren't also reflected in the patches.
      
      * llama: adjust clip patch for mingw utf-16 (#7065)
      
      * llama: adjust clip patch for mingw utf-16
      
      * llama: ensure static linking of runtime libs
      
      Avoid runtime dependencies on non-standard libraries
      
      * runner.go: Enable llamafile (all platforms) and BLAS (Mac OS)
      
      These are two features that are shown on llama.cpp's system info
      that are currently different between the two runners. On my test
      systems the performance difference is very small to negligible
      but it is probably still good to equalize the features.
      
      * llm: Don't add BOS/EOS for tokenize requests
      
      This is consistent with what server.cpp currently does. It affects
      things like token processing counts for embedding requests.
      
      * runner.go: Don't cache prompts for embeddings
      
       Our integration with server.cpp implicitly disables prompt caching
       because it is not part of the JSON object being parsed; this makes
       the Go runner behave similarly.
      
      Prompt caching has been seen to affect the results of text completions
      on certain hardware. The results are not wrong either way but they
      are non-deterministic. However, embeddings seem to be affected even
      on hardware that does not show this behavior for completions. For
      now, it is best to maintain consistency with the existing behavior.
      
      * runner.go: Adjust debug log levels
      
      Add system info printed at startup and quiet down noisier logging.
      
      * llama: fix compiler flag differences (#7082)
      
      Adjust the flags for the new Go server to more closely match the
      generate flow
      
      * llama: refine developer docs (#7121)
      
      * llama: doc and example clean up (#7122)
      
      * llama: doc and example clean up
      
      * llama: Move new dockerfile into llama dir
      
      Temporary home until we fully transition to the Go server
      
      * llama: runner doc cleanup
      
      * llama.go: Add description for Tokenize error case
      
      ---------
       Co-authored-by: Jesse Gross <jesse@ollama.com>
       Co-authored-by: Daniel Hiltgen <daniel@ollama.com>
       Co-authored-by: Daniel Hiltgen <dhiltgen@users.noreply.github.com>
      96efd905
  21. 05 Aug, 2024 1 commit
  22. 02 Aug, 2024 1 commit
  23. 14 Jun, 2024 2 commits
    • refined test timing · 68dfc623
      Daniel Hiltgen authored
      adjust timing on some tests so they don't time out on small/slow GPUs
      68dfc623
    • Improve multi-gpu handling at the limit · 6fd04ca9
      Daniel Hiltgen authored
      Still not complete; our prediction needs refinement to understand each
      discrete GPU's available space so we can see how many layers fit in each one.
      Since we can't split a single layer across multiple GPUs, we can't treat free
      space as one logical block.
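
      A sketch of the per-GPU fitting this is aiming at; the field names and
      the greedy packing are illustrative:

        package discover

        type gpuInfo struct {
            ID       string
            FreeVRAM uint64 // bytes
        }

        // layersThatFit counts how many whole layers can be placed across the
        // GPUs, packing one GPU at a time, because a single layer cannot be
        // split across devices and free space is therefore not one big pool.
        func layersThatFit(gpus []gpuInfo, layerSize uint64, totalLayers int) int {
            placed := 0
            for _, g := range gpus {
                free := g.FreeVRAM
                for placed < totalLayers && free >= layerSize {
                    free -= layerSize
                    placed++
                }
            }
            return placed
        }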
      6fd04ca9
  24. 10 May, 2024 1 commit
  25. 06 May, 2024 1 commit
  26. 23 Apr, 2024 1 commit
    • Request and model concurrency · 34b9db5a
      Daniel Hiltgen authored
      This change adds support for multiple concurrent requests, as well as
      loading multiple models by spawning multiple runners. The default
      settings are currently set at 1 concurrent request per model and only 1
      loaded model at a time, but these can be adjusted by setting
      OLLAMA_NUM_PARALLEL and OLLAMA_MAX_LOADED_MODELS.
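
      A sketch of reading those defaults; both env var names come from the
      message above, while the helper is illustrative:

        package sched

        import (
            "os"
            "strconv"
        )

        // envInt returns a positive integer from the environment, falling back
        // to the given default when the variable is unset or invalid.
        func envInt(name string, def int) int {
            if v := os.Getenv(name); v != "" {
                if n, err := strconv.Atoi(v); err == nil && n > 0 {
                    return n
                }
            }
            return def
        }

        var (
            numParallel     = envInt("OLLAMA_NUM_PARALLEL", 1)      // concurrent requests per model
            maxLoadedModels = envInt("OLLAMA_MAX_LOADED_MODELS", 1) // models loaded at once
        )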
      34b9db5a
  27. 01 Apr, 2024 1 commit
  28. 26 Mar, 2024 1 commit
  29. 25 Mar, 2024 1 commit
  30. 23 Mar, 2024 1 commit