1. 19 May, 2025 1 commit
    • llm: Consistently track unassigned model data · a2cc8571
      Jesse Gross authored
      In some cases, if we fail to assign a piece of the model to a GPU then
      we lose track of this data. Although it doesn't change the memory
      allocation, it does affect the total size of the model reported by
      tools such as ollama ps (and also the percent offloaded).
      
      This makes it look like setting num_gpu isn't reflected in ollama ps. That
      isn't true, but the offload percentage may appear not to change.
      
      Spreading the model across more GPUs will continue to impact the
      reported total size of the model.
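      
      A hypothetical sketch of the bookkeeping described above (type and field names
      are illustrative, not Ollama's actual code): pieces of the model that cannot be
      placed on a GPU still count toward the reported total, so the size shown by
      ollama ps and the offload percentage stay accurate.
      
      ```go
      package main
      
      import "fmt"
      
      // memoryEstimate is an illustrative stand-in for the server's layout bookkeeping.
      type memoryEstimate struct {
          gpuBytes        uint64 // bytes successfully assigned to GPUs
          unassignedBytes uint64 // bytes that could not be placed on any GPU
      }
      
      // assign records a piece of the model, keeping unassigned data in the total
      // instead of losing track of it.
      func (e *memoryEstimate) assign(bytes uint64, placedOnGPU bool) {
          if placedOnGPU {
              e.gpuBytes += bytes
          } else {
              e.unassignedBytes += bytes
          }
      }
      
      func (e *memoryEstimate) total() uint64 { return e.gpuBytes + e.unassignedBytes }
      
      func main() {
          var e memoryEstimate
          e.assign(512<<20, true)  // placed on a GPU
          e.assign(256<<20, false) // could not be placed; still tracked
          fmt.Printf("total=%d MiB offloaded=%.0f%%\n",
              e.total()>>20, 100*float64(e.gpuBytes)/float64(e.total()))
      }
      ```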
  2. 14 May, 2025 1 commit
  3. 13 May, 2025 2 commits
  4. 12 May, 2025 1 commit
  5. 08 May, 2025 1 commit
  6. 07 May, 2025 2 commits
    • sched: fix race leading to orphaned runners (#10599) · 5e380c3b
      Daniel Hiltgen authored
      If a model is loading, and the request context is canceled during the load
      by a client closing the connection, and another request is inbound for the
      same model with a different configuration (context size, etc.) that requires
      a reload, two unload events can be in flight. The first shuts down the
      original model load, but the second dropped the reference to the newly
      reloading runner, causing the leak.
      
      The primary fix is detecting the duplicate unload and ignoring the second
      instance.  The load routine is also hardened to ensure we detect
      clobbering an already present runner and unload it with a warning.
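      
      A rough sketch of that kind of guard (hypothetical names; the real scheduler is
      more involved): an unload for a runner that has already been replaced or removed
      is detected and ignored, so the reference to the newly reloading runner is never
      clobbered.
      
      ```go
      package main
      
      import (
          "log/slog"
          "sync"
      )
      
      type runner struct{ model string }
      
      // scheduler tracks the runner currently loaded for each model.
      type scheduler struct {
          mu     sync.Mutex
          loaded map[string]*runner
      }
      
      // unload removes r only if it is still the runner registered for its model.
      // A stale or duplicate unload is ignored.
      func (s *scheduler) unload(r *runner) {
          s.mu.Lock()
          defer s.mu.Unlock()
          if cur, ok := s.loaded[r.model]; !ok || cur != r {
              slog.Debug("ignoring duplicate unload", "model", r.model)
              return
          }
          delete(s.loaded, r.model)
          // ...stop the runner subprocess, release VRAM bookkeeping, etc.
      }
      
      func main() {
          s := &scheduler{loaded: map[string]*runner{}}
      
          old := &runner{model: "llama3"}
          s.loaded["llama3"] = old
      
          s.unload(old)                                 // first unload: removes the runner
          s.loaded["llama3"] = &runner{model: "llama3"} // reload with a new configuration
          s.unload(old)                                 // duplicate unload: detected and ignored
      }
      ```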
    • remove cuda v11 (#10569) · fa393554
      Daniel Hiltgen authored
      This reduces the size of our Windows installer payloads by ~256M by dropping
      support for NVIDIA drivers older than Feb 2023. Hardware support is unchanged.
      
      Linux default bundle sizes are reduced by ~600M to 1G.
  7. 06 May, 2025 1 commit
    • Move quantization to new backend (#10363) · 42481045
      Daniel Hiltgen authored
      * Move quantization logic to GGML via new backend
      
      This moves the model-aware logic to Go code and calls GGML's quantization code for model creation.
      
      * Remove "add model quantizations"
      
      This is no longer needed now that quantization is implemented in Go+GGML code directly.
  8. 05 May, 2025 3 commits
  9. 03 May, 2025 2 commits
    • win: ensure ollama paths come first (#10549) · 6a74bba7
      Daniel Hiltgen authored
      For all search-path env vars, make sure our dirs come first to avoid
      potentially finding other incompatible libraries on the user's system.
      
      Also fixes a minor build script glitch for Windows ROCm.
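      
      A minimal sketch of the idea (illustrative path and helper name, not the
      installer's actual code): prepend Ollama's own directory to the relevant
      search-path variable so it is consulted before anything else on the system.
      
      ```go
      package main
      
      import (
          "fmt"
          "os"
          "path/filepath"
      )
      
      // prependToPathVar puts dir at the front of a search-path variable
      // (e.g. PATH on Windows) so our libraries are found before any others.
      func prependToPathVar(name, dir string) {
          if cur := os.Getenv(name); cur != "" {
              dir = dir + string(os.PathListSeparator) + cur
          }
          os.Setenv(name, dir)
      }
      
      func main() {
          libDir := filepath.Join(`C:\Program Files\Ollama`, "lib") // hypothetical install dir
          prependToPathVar("PATH", libDir)
          fmt.Println(os.Getenv("PATH"))
      }
      ```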
    • sched: logging improvements (#10550) · 76ea735a
      Daniel Hiltgen authored
      This enhances our logging in the scheduler. The initial "waiting for server" log
      no longer claims an error state (it now says "not responding", which better
      reflects the actual state). Runners now have slog wiring to report more details
      about the runner, including its PID.
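      
      An illustrative sketch of the per-runner slog wiring mentioned here (attribute
      names are made up): each runner gets a logger that carries its PID and model,
      so its log lines are easy to attribute.
      
      ```go
      package main
      
      import (
          "log/slog"
          "os"
      )
      
      func main() {
          base := slog.New(slog.NewTextHandler(os.Stderr, nil))
      
          // Attach runner-specific attributes once; every line logged through this
          // logger then carries the PID and model automatically.
          runnerLog := base.With("runner_pid", os.Getpid(), "model", "llama3")
      
          runnerLog.Info("waiting for server to become responsive") // not reported as an error
          runnerLog.Info("server is responsive")
      }
      ```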
  10. 30 Apr, 2025 1 commit
  11. 25 Apr, 2025 1 commit
  12. 24 Apr, 2025 1 commit
  13. 03 Apr, 2025 1 commit
    • llm: set done reason at server level (#9830) · e53b3cbd
      Bruce MacDonald authored
      No functional change. Many different done reasons can be set at the runner
      level, so rather than obscuring them we should return them to the server
      process and let it choose what to do with the done reason. This separates
      the API concerns from the runner.
  14. 26 Mar, 2025 2 commits
    • ggml: Support heterogeneous KV cache layer sizes in memory estimation · f66216e3
      Jesse Gross authored
      Gemma3 uses sliding windows for its context on 5 of 6 layers, significantly
      reducing memory usage but leading to uneven usage across layers,
      which makes allocation to the correct GPU difficult. We currently
      estimate very conservatively by assuming all layers are consistent
      at the max size.
      
      Llama3.2-vision is also inconsistent between self-attention and cross-attention
      layers. At the moment, we calculate the correct total size and then average it
      across layers. In some cases, this may lead to crashes if a large layer is
      placed on a GPU sized by the average.
      
      This allows memory estimation to calculate per-layer KV cache sizes and take
      them into account when placing layers onto GPUs. We already do this for
      weights that vary per-tensor, so this is a logical extension.
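      
      A simplified sketch of per-layer sizing (hypothetical dimensions; the real
      estimate also accounts for heads, data types, and more): each layer's KV size
      is computed from its own effective window rather than assuming the maximum for
      every layer.
      
      ```go
      package main
      
      import "fmt"
      
      // kvBytes is an illustrative per-layer KV cache size:
      // 2 (K and V) * effective context length * KV dimension * bytes per element.
      func kvBytes(effectiveCtx, kvDim, elemSize int) uint64 {
          return uint64(2 * effectiveCtx * kvDim * elemSize)
      }
      
      func main() {
          const ctx, window, kvDim, f16 = 8192, 1024, 1024, 2
      
          // e.g. five of six layers use a sliding window; one uses the full context.
          layers := []int{window, window, window, window, window, ctx}
      
          var total uint64
          for i, eff := range layers {
              sz := kvBytes(eff, kvDim, f16)
              total += sz
              fmt.Printf("layer %d: %d KiB\n", i, sz>>10)
          }
          uniform := uint64(len(layers)) * kvBytes(ctx, kvDim, f16)
          fmt.Printf("total: %d MiB (vs %d MiB assuming every layer uses the full context)\n",
              total>>20, uniform>>20)
      }
      ```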
      
      Fixes #9730
      Fixes #9890
    • llm: Fix debug logging for memory estimates · f4f0992b
      Jesse Gross authored
  15. 14 Mar, 2025 1 commit
    • llm: remove internal subprocess req and resp types (#9324) · 3892c3a7
      Bruce MacDonald authored
      This commit refactors the LLM subsystem by removing internal subprocess
      request and response types. It consolidates duplicate type definitions
      across the codebase, moving them to centralized locations. The change also
      standardizes interfaces between components, simplifies the ServerStatusResp
      struct, and moves the ParseDurationMs function to a common package. This
      cleanup reduces code duplication between different runner implementations
      (llamarunner and ollamarunner).
  16. 13 Mar, 2025 1 commit
  17. 11 Mar, 2025 1 commit
  18. 10 Mar, 2025 1 commit
  19. 07 Mar, 2025 1 commit
    • model: Don't unconditionally add special tokens · b70fc4d5
      Jesse Gross authored
      We sometimes tokenize partial strings. For example, with
      multimodal inputs, we split the input string around the images
      and then tokenize each piece. In these cases, we should only add
      the special tokens on the first piece.
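      
      A toy sketch of the rule (hypothetical tokenizer; real tokenization is model
      specific): when the input is split around images, only the first text piece is
      tokenized with special tokens.
      
      ```go
      package main
      
      import (
          "fmt"
          "strings"
      )
      
      // tokenize is a stand-in for a real tokenizer; addSpecial controls whether
      // special tokens such as BOS are prepended.
      func tokenize(text string, addSpecial bool) []string {
          toks := strings.Fields(text)
          if addSpecial {
              return append([]string{"<bos>"}, toks...)
          }
          return toks
      }
      
      func main() {
          // A multimodal prompt split around an image placeholder.
          pieces := []string{"describe this image:", "and compare it to the previous one"}
      
          var out []string
          for i, p := range pieces {
              out = append(out, tokenize(p, i == 0)...) // special tokens on the first piece only
              if i < len(pieces)-1 {
                  out = append(out, "<image>")
              }
          }
          fmt.Println(out)
      }
      ```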
  20. 04 Mar, 2025 1 commit
    • New engine: vision models and auto-fallback (#9113) · 1fdb351c
      Daniel Hiltgen authored
      * Include unified vision layers in memory prediction
      
      For newer vision models with a single gguf, include
      the projection estimates.
      
      * Adjust CLI to handle both styles of vision model metadata
      
      * Wire up new tokenizers for new engine
      
      If we're loading the new engine, utilize the new model
      text processor instead of calling into cgo wrappers for
      llama.cpp.  This also cleans up some tech debt from the
      older tokenization flow for the C++ server which was
      no longer used.
      
      This also adjusts the grammar handling logic to pass
      through to the new engine instead of utilizing the cgo
      schema to grammar call.
      
      * Lay foundation for auto selection of new engine
  21. 24 Feb, 2025 1 commit
  22. 14 Feb, 2025 4 commits
    • llm: attempt to evaluate symlinks, but do not fail (#9089) · 5296f487
      Jeffrey Morgan authored
      Provides a better approach than #9088: it attempts to
      evaluate symlinks (important on macOS, where 'ollama' is
      often a symlink), but uses the result of os.Executable()
      as a fallback in scenarios where filepath.EvalSymlinks
      fails due to permission errors or other issues.
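      
      A minimal sketch of that approach (not the exact upstream code):
      
      ```go
      package main
      
      import (
          "fmt"
          "os"
          "path/filepath"
      )
      
      // exePath resolves the executable path through symlinks when possible,
      // falling back to the unresolved os.Executable() result if
      // filepath.EvalSymlinks fails (e.g. permission errors on parent dirs).
      func exePath() (string, error) {
          exe, err := os.Executable()
          if err != nil {
              return "", err
          }
          if resolved, err := filepath.EvalSymlinks(exe); err == nil {
              return resolved, nil
          }
          return exe, nil
      }
      
      func main() {
          p, err := exePath()
          if err != nil {
              fmt.Fprintln(os.Stderr, err)
              os.Exit(1)
          }
          fmt.Println(p)
      }
      ```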
    • llm: do not evaluate symlink for exe path lookup (#9088) · f05774b0
      Jeffrey Morgan authored
      In some cases, the directories in the executable path read by
      filepath.EvalSymlinks are not accessible, resulting in permission
      errors that prevent models from running. It also doesn't work well
      with long paths on Windows, which likewise causes errors. This change
      removes the filepath.EvalSymlinks call on os.Executable() altogether.
    • Runner for Ollama engine · ed443a03
      Jesse Gross authored
      This provides integration with the new Ollama engine
      (58245413 next ollama runner (#7913)) and the rest of the Ollama
      infrastructure such as the runner and Ollama server.
      
      In addition, it also builds out the KV cache infrastructure to
      support requirements of how Ollama runs models such as:
       - Parallel processing
       - Memory management for defragmentation and shifting
       - Multi-modal models
      
      Both old and new engines continue to be supported. By default, only
      the old engine is used. To enable the new engine:
      
      Start the server with the OLLAMA_NEW_ENGINE environment variable set:
      OLLAMA_NEW_ENGINE=1 ./ollama serve
      
      Start a model that is supported by the Ollama engine. This one is Llama 3.1 8b Q4_K_M:
      ./ollama run jessegross/llama3.1
    • next ollama runner (#7913) · 58245413
      Michael Yang authored
      
      feat: add new Ollama engine using ggml through cgo
      
      This change introduces a new way to run pretrained models. It introduces 3 high level interfaces and a bunch of smaller helper interfaces to facilitate this.
      
      - `model.Model` defines the interface for a model architecture. Models such as `llama` and `mllama`, which are provided as examples, can implement the model's forward propagation in the `Forward` method. This method will be called to generate completions. This interface can be found in `model/model.go`
      - `ml.Backend` defines the interface for a backend tensor library, in this case `ggml`. Among other things, a Backend is responsible for loading a pretrained model into hardware (GPU, CPU, etc) and providing an interface for Models to access loaded tensors. This interface can be found in `ml/backend.go`
      - `ml.Tensor` defines the interface for a tensor and tensor operations
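      
      A heavily simplified Go sketch of how these three interfaces relate (signatures are illustrative, not the actual definitions in `model/model.go` and `ml/backend.go`):
      
      ```go
      package sketch
      
      // Tensor is an illustrative stand-in for ml.Tensor: a handle to backend data
      // plus the operations models need (matmul, add, softmax, ...).
      type Tensor interface {
          Shape() []int
          Mulmat(other Tensor) Tensor
      }
      
      // Backend is an illustrative stand-in for ml.Backend: it loads pretrained
      // weights onto the hardware and hands tensors back to the model by name.
      type Backend interface {
          Get(name string) Tensor
      }
      
      // Model is an illustrative stand-in for model.Model: an architecture such as
      // llama implements Forward to compute outputs for the next token.
      type Model interface {
          Forward(b Backend, inputs []int32) (Tensor, error)
      }
      ```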
      
      This is the first implementation of the new engine. Follow up PRs will implement more features:
      
      - non-greedy sampling (#8410)
      - integration with Ollama and KV caching (#8301)
      - more model support (#9080) with more coming soon
      Co-authored-by: Bruce MacDonald <brucewmacdonald@gmail.com>
  23. 04 Feb, 2025 1 commit
  24. 03 Feb, 2025 1 commit
  25. 29 Jan, 2025 1 commit
    • next build (#8539) · dcfb7a10
      Michael Yang authored
      
      * add build to .dockerignore
      
      * test: only build one arch
      
      * add build to .gitignore
      
      * fix ccache path
      
      * filter amdgpu targets
      
      * only filter if autodetecting
      
      * Don't clobber gpu list for default runner
      
      This ensures the GPU specific environment variables are set properly
      
      * explicitly set CXX compiler for HIP
      
      * Update build_windows.ps1
      
      This isn't complete, but is close.  Dependencies are missing, and it only builds the "default" preset.
      
      * build: add ollama subdir
      
      * add .git to .dockerignore
      
      * docs: update development.md
      
      * update build_darwin.sh
      
      * remove unused scripts
      
      * llm: add cwd and build/lib/ollama to library paths
      
      * default DYLD_LIBRARY_PATH to LD_LIBRARY_PATH in runner on macOS
      
      * add additional cmake output vars for msvc
      
      * interim edits to make server detection logic work with dll directories like lib/ollama/cuda_v12
      
      * remove unnecessary filepath.Dir, cleanup
      
      * add hardware-specific directory to path
      
      * use absolute server path
      
      * build: linux arm
      
      * cmake install targets
      
      * remove unused files
      
      * ml: visit each library path once
      
      * build: skip cpu variants on arm
      
      * build: install cpu targets
      
      * build: fix workflow
      
      * shorter names
      
      * fix rocblas install
      
      * docs: clean up development.md
      
      * consistent build dir removal in development.md
      
      * silence -Wimplicit-function-declaration build warnings in ggml-cpu
      
      * update readme
      
      * update development readme
      
      * llm: update library lookup logic now that there is one runner (#8587)
      
      * tweak development.md
      
      * update docs
      
      * add windows cuda/rocm tests
      
      ---------
      Co-authored-by: jmorganca <jmorganca@gmail.com>
      Co-authored-by: Daniel Hiltgen <daniel@ollama.com>
  26. 08 Jan, 2025 1 commit
  27. 17 Dec, 2024 3 commits
    • llm: do not error on "null" format (#8139) · 2ddc32d5
      Blake Mizerany authored
      This fixes another regression introduced by the previous commit, which fixed
      other known bugs.
    • llm: do not silently fail for supplied, but invalid formats (#8130) · 87f0a49f
      Blake Mizerany authored
      Changes in #8002 introduced fixes for bugs with mangling JSON Schemas.
      It also fixed a bug where the server would silently fail when clients
      requested invalid formats. It also, unfortunately, introduced a bug
      where the server would reject requests with an empty format, which
      should be allowed.
      
      The change in #8127 updated the code to allow the empty format, but also
      reintroduced the regression where the server would silently fail when
      the format was set, but invalid.
      
      This commit fixes both regressions. The server does not reject the empty
      format, but it does reject invalid formats. It also adds tests to help
      us catch regressions in the future.
      
      Also, the updated code provides a more detailed error message when a
      client sends a non-empty, but invalid format, echoing the invalid format
      in the response.
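      
      A small sketch of the resulting behavior (hypothetical helper, not the server's
      actual validation code): an empty format is allowed, anything else must be valid
      JSON, and an invalid value is echoed back in the error.
      
      ```go
      package main
      
      import (
          "encoding/json"
          "fmt"
      )
      
      // checkFormat allows an empty format, requires a non-empty format to be valid
      // JSON, and echoes an invalid value back so clients can see what was rejected.
      func checkFormat(format json.RawMessage) error {
          if len(format) == 0 {
              return nil // empty format: allowed
          }
          if !json.Valid(format) {
              return fmt.Errorf("invalid format %q", string(format))
          }
          return nil
      }
      
      func main() {
          fmt.Println(checkFormat(nil))                                  // <nil>
          fmt.Println(checkFormat(json.RawMessage(`{"type":"object"}`))) // <nil>
          fmt.Println(checkFormat(json.RawMessage(`{oops`)))             // invalid format "{oops"
      }
      ```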
      
      This commit also takes the opportunity to remove superfluous linter
      checks.
  28. 11 Dec, 2024 2 commits
    • llama: preserve field order in user-defined JSON schemas (#8002) · 9039c821
      Blake Mizerany authored
      Previously we decoded and re-encoded JSON schemas during validation,
      which served no purpose since json.RawMessage already validates JSON
      syntax. Worse, the re-encoding lost field ordering from the original
      schema, which affects inference quality during step-by-step reasoning.
      
      While fixing this ordering issue by using json.RawMessage directly,
      testing revealed that schema_to_grammar (from llama.cpp) also fails to
      preserve field order during grammar generation. This appears to be the
      root cause of inference degradation.
      
      This change prevents us from mangling the user's original schema order,
      but we still need to address the ordering issue in schema_to_grammar.
      That will be a separate change.
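      
      A small sketch of the difference (illustrative types only): keeping the schema
      as json.RawMessage preserves the client's original bytes, while a decode and
      re-encode round trip through a map re-orders the keys.
      
      ```go
      package main
      
      import (
          "encoding/json"
          "fmt"
      )
      
      type request struct {
          // Keeping the schema as raw bytes avoids the decode/re-encode round trip
          // that loses the client's key order.
          Format json.RawMessage `json:"format"`
      }
      
      func main() {
          in := []byte(`{"format":{"name":{"type":"string"},"age":{"type":"integer"}}}`)
      
          var req request
          if err := json.Unmarshal(in, &req); err != nil {
              panic(err)
          }
          fmt.Println(string(req.Format)) // original key order preserved: name, then age
      
          // For contrast: round-tripping through a map re-orders the keys.
          var m map[string]any
          json.Unmarshal(req.Format, &m)
          out, _ := json.Marshal(m)
          fmt.Println(string(out)) // keys sorted alphabetically: age before name
      }
      ```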
      
      Updates #7978
    • 527cc978
      Jeffrey Morgan authored