1. 23 Sep, 2025 1 commit
  2. 18 Sep, 2025 1 commit
    • fix: model load for unsupported embedding models (#12311) · 9f3a37fd
      Michael Yang authored
      with #12181, there's now support for embeddings in the ollama engine.
      this is done by mutating the architecture and appending _embed when it
      detects an embedding model. however, this introduced a bug: if an
      embedding model was run on top of an existing ollama engine model
      without an embedding implementation, e.g. llama4, it would pass the
      initial arch support check but fail when actually loaded.
      
      there are currently two entrypoints for creating a model. previously the
      second entrypoint was necessary because calling model.New would also
      load the model. since #11818, this is no longer the case, so merge them
      to reduce complexity.
      9f3a37fd
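To make the failure mode concrete, here is a minimal sketch of the pattern this commit describes, with hypothetical names (modelConstructors, newModel): the architecture key gets an _embed suffix for embedding models, and a single entrypoint rejects unknown architectures before any weights are loaded.

```go
package main

import "fmt"

// modelConstructors is a hypothetical registry keyed by architecture name.
// Embedding variants are registered under an "_embed" suffix.
var modelConstructors = map[string]func() any{
	"llama":      func() any { return "llama text model" },
	"bert_embed": func() any { return "bert embedding model" },
	// no "llama4_embed" entry: llama4 has no embedding implementation
}

// newModel is the single entrypoint: it resolves the (possibly mutated)
// architecture and fails before any weights are loaded.
func newModel(arch string, embedding bool) (any, error) {
	if embedding {
		arch += "_embed"
	}
	ctor, ok := modelConstructors[arch]
	if !ok {
		return nil, fmt.Errorf("unsupported architecture %q", arch)
	}
	return ctor(), nil
}

func main() {
	if _, err := newModel("llama4", true); err != nil {
		// rejected up front instead of passing the arch check and failing at load time
		fmt.Println(err)
	}
}
```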
  3. 16 Sep, 2025 1 commit
  4. 15 Sep, 2025 1 commit
  5. 04 Sep, 2025 1 commit
  6. 02 Sep, 2025 1 commit
  7. 29 Aug, 2025 1 commit
    • perf: build graph for next batch async to keep GPU busy (#11863) · 517807cd
      Daniel Hiltgen authored
      * perf: build graph for next batch in parallel to keep GPU busy
      
      This refactors the main run loop of the ollama runner to perform the
      GPU-intensive tasks (Compute+Floats) in a goroutine, so the next batch
      can be prepared in parallel and the GPU spends less time stalled
      waiting for its next batch of work.
      
      * tests: tune integration tests for ollama engine
      
      This tunes the integration tests to focus more on models supported
      by the new engine.
      517807cd
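The structure of the change, very roughly (stand-in functions, not the runner's actual code): the GPU-bound step runs in a goroutine while the next batch is prepared, so the GPU is not idle during batch preparation.

```go
package main

import (
	"fmt"
	"time"
)

type batch struct{ id int }

// prepareBatch stands in for the CPU-side work of building the next batch/graph.
func prepareBatch(id int) batch {
	time.Sleep(5 * time.Millisecond)
	return batch{id: id}
}

// compute stands in for the GPU-heavy Compute+Floats step.
func compute(b batch) {
	time.Sleep(20 * time.Millisecond)
	fmt.Println("computed batch", b.id)
}

func main() {
	done := make(chan struct{}, 1)
	done <- struct{}{} // let the first compute start immediately

	next := prepareBatch(0)
	for i := 1; i <= 3; i++ {
		<-done // wait for the previous batch to finish on the GPU
		go func(b batch) {
			compute(b) // GPU-bound work runs in a goroutine...
			done <- struct{}{}
		}(next)
		next = prepareBatch(i) // ...while the next batch is built in parallel
	}
	<-done
	compute(next) // finish the last prepared batch
}
```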
  8. 14 Aug, 2025 1 commit
    • update vendored llama.cpp and ggml (#11823) · 1a19df1f
      Michael Yang authored
      * TEMPORARY: Update the llama.cpp upstream to my fork's Granite Four branch
      
      This will be redone once my branch is merged upstream in llama.cpp
      
      * feat: Update all patches
      
      There are a number that are no longer needed at all:
      
      - 0003-embeddings: Embeddings entirely overhauled on master
      - 0008-ensure-KV-cache-is-fully-defragmented: KV caching entirely
          overhauled on master
      - 0019-metal-add-mean-kernel-14267: Merged upstream
      - 0020-CUDA-add-mean-operation-14313: Merged upstream
      
      * feat: Sync llama.cpp and ggml
      
      * fix: Update rsync-filter for all moved/new/removed files
      
      * fix: Add files missing from sync
      
      * fix: Update ggml rsync-filter for new ggml-cpu/arch subdirs
      
      * fix: Add ggml files missing from sync
      
      * fix: Narrow llama.cpp rsync-filter to not include mtmd main tool cpp files
      
      * fix: Remove mtmd main cpp files
      
      * fix: Add missing include in sampling_ext.cpp
      
      * fix: Update llama.go to use mtmd instead of clip/llava
      
      * fix: Add patch for mtmd_input_text
      
      * chore: Ignore *.patched in the patch directory
      
      * fix: Fix support for arch-specific ggml-cpu source files with new arrangement
      
      In https://github.com/ggml-org/llama.cpp/pull/13892, all arch-specific
      implementations were split out into a nested tree structure under
      ggml-cpu/arch. This conflicts with standard CGO layout where all
      arch-specific source files are expected to live in the same directory as
      the parent go module and use suffixes based on GOOS and GOARCH. As such,
      there were really two options for getting this to work:
      
      1. Add a patch on top of the GGML sync to rearrange the files to match the
      Go layout convention
      2. Use CGO directives to conditionally include the nested source files in
      the compilation units
      
      This commit does (2) in order to minimize the set of changes needed on top
      of the upstream file layout. To get this to work, there are two key things
      needed:
      
      1. In cpu.go, #cgo directives are added to explicitly set __${GOARCH}__ in
      the preprocessor directives
      2. In arch-impls.c|cpp, use an #ifdef | #elif defined | #endif chain to
      explicitly include the .c|.cpp files for the given architecture from the
      nested directory (a sketch of this approach follows at the end of this
      entry)
      
      * fix: Use mtmd_helper to correctly load the bitmap for the image
      
      * fix: Apply patch for mtmd_text_input
      
      * fix: Add missing stb to llama.cpp rsync-filter
      
      * fix: Add sync'ed stb vendored header
      
      * fix: Use c++17 and include vendor for go wrapper modules
      
      * fix: Update patch 0015 for upstream implementation of uuid
      
      * feat: Bump to the latest tip of the branch
      
      * fix: Update patches for bump
      
      * feat: Bump back to the central repo and point at the latest master
      
      This includes granite 4 and a number of other model architectures!
      
      * fix: Revert changes to ggml export GPU UUID patch
      
      * fix: Add patch for GGML_VERSION and GGML_COMMIT constants
      
      * feat: Sync all patched code
      
      * build: Include cmake/common.cmake in ggml sync
      
      * build: Add top-level include for GNUInstallDirs in CMakeLists.txt
      
      This is used to populate CMAKE_INSTALL_BINDIR
      
      * fix: Add a patch to avoid power throttling API on non-msvc windows builds
      
      * fix: Sync patch changes for ggml-cpu.c
      
      * feat: Bump llama.cpp to 4a4f42
      
      This picks up support for Kimi K2 and PLaMO-2
      
      * feat: Sync llama.cpp
      
      * fix: Handle multi-chunk image encodings from mtmd
      
      * fix: Re-number patches after merge with `main`
      
      * feat: Bump to 41e78c in the makefile
      
      * fix: Fix Solar and argsort/copy patches after bump
      
      * fix: Remove Gemma3n CUDA Graphs patch
      
      It was implemented upstream:
      https://github.com/ggml-org/llama.cpp/pull/14741
      
      * feat: Sync llama.cpp / ggml after latest bump
      
      * build: Remove unnecessary CFLAGS definitions in cpu.go
      
      * fix: Remove unnecessary additions in the rsync-filter
      
      * fix: Remove unused vendored code for chat template parsing
      
      * Revert "fix: Remove Gemma3n CUDA Graphs patch"
      
      This reverts commit d724caced3ce21f08924d4b7801f94ce6638f6ea.
      
      * fix: Update 0020 CUDA Graphs for gemma3n to keep both llama.cpp and ollama fixes
      
      https://github.com/ollama/ollama/pull/11195#issuecomment-3137312394
      
      
      
      * fix: Sync ggml-cuda.cu after keeping both style cuda graph fixes for gemma3n
      
      * unwind mxfp4 patch
      
      Prepare to bump ggml with their impl for mxfp4
      
      * bump
      
      * fix windows build error
      
      * Convert tensors at load time
      
      Repack the mxfp4 tensors as ggml's kernels expect them to be.
      
      * convert mlp bf16 to f32
      
      * buffer the conversion better
      
      * reshape earlier
      
      * openai swiglu
      
      * add ids
      
      * split qkv, gate_up
      
      * fix nested alt tags
      
      * fast attention
      
      * remove debug messages
      
      * fix lint
      
      * remove redundant test
      
      * remap values only if source/target are different
      
      * add back i32->i32 copy
      
      * refactor cpu quants
      
      * clean up vendor
      
      * update patch instructions
      
      * clean up patches
      
      * remove webgpu
      
      * update mem
      
      * also handle gpt-oss
      
      * revert convert changes
      
      ---------
      Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
      Co-authored-by: Gabe Goodhart <ghart@us.ibm.com>
      Co-authored-by: Daniel Hiltgen <daniel@ollama.com>
      1a19df1f
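For the arch-specific ggml-cpu/arch change described earlier in this commit, option (2) looks roughly like the following hypothetical cpu.go sketch (the flags, macro names, and include paths are illustrative, not the exact ones in the tree): cgo build constraints define the __${GOARCH}__ macro, and a small C shim selects the nested per-arch sources with an #ifdef chain.

```go
// Package cpu sketches option (2) from the arch-specific ggml-cpu item in
// this commit: keep upstream's nested ggml-cpu/arch layout and select the
// right sources with the C preprocessor instead of Go's GOOS/GOARCH file
// suffix convention. Names here are illustrative.
package cpu

// #cgo amd64 CPPFLAGS: -D__amd64__
// #cgo arm64 CPPFLAGS: -D__arm64__
// #cgo CPPFLAGS: -I${SRCDIR}
//
// A companion C shim (e.g. arch-impls.c) then includes the nested
// implementation for whichever architecture macro was defined:
//
//   #if defined(__amd64__)
//   #include "arch/x86/quants.c"
//   #elif defined(__arm64__)
//   #include "arch/arm/quants.c"
//   #endif
import "C"
```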
  9. 22 May, 2025 1 commit
    • ml: Panic rather than return error on tensor allocation failure · 1f371ea9
      Jesse Gross authored
      FromFloatSlice and FromIntSlice return an error if the shape doesn't
      match the passed data or if memory can't be allocated. Since these
      are inputs, the memory being allocated is system memory rather than VRAM.
      
      In many cases, the caller can't really handle the error and panics.
      
      Empty and Zeros directly panic if they can't allocate memory.
      
      This makes things consistent by panicking in the first two cases as well,
      removing a fair amount of error handling code. This is also consistent
      with how Go typically handles these situations.
      1f371ea9
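The resulting convention, in a minimal stand-alone sketch (fromFloatSlice here is a stand-in, not the real ml package function): constructors for caller-supplied input validate the shape and panic, matching the behavior of Empty and Zeros.

```go
package main

import "fmt"

// fromFloatSlice is a stand-in for an input-tensor constructor. Since the
// data is caller-supplied and lives in system memory, a shape mismatch
// panics rather than returning an error the caller cannot handle anyway.
func fromFloatSlice(data []float32, shape ...int) []float32 {
	n := 1
	for _, d := range shape {
		n *= d
	}
	if n != len(data) {
		panic(fmt.Sprintf("invalid shape %v for %d elements", shape, len(data)))
	}
	return data
}

func main() {
	_ = fromFloatSlice(make([]float32, 6), 2, 3) // ok: 2*3 matches 6 elements
	// fromFloatSlice(make([]float32, 5), 2, 3)  // would panic: shape mismatch
	fmt.Println("shapes validated")
}
```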
  10. 19 May, 2025 1 commit
    • ggml: Separate tensor load from backend creation · 94ab428e
      Jesse Gross authored
      Currently, when the backend is created, the tensors are loaded at the
      same time, which is a slow operation. This separates them into two
      steps:
       - Create backend, including enumerating tensors and memory allocation
       - Loading tensor data
      
      This allows more flexibility in managing model loading.
      94ab428e
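In outline, the split looks like this (an illustrative interface, not the actual ggml backend API): construction enumerates tensors and allocates memory, and loading the tensor data becomes a separate, explicit step the caller can schedule.

```go
package main

import "fmt"

// Backend is an illustrative two-phase interface: construction enumerates
// tensors and allocates memory; Load streams the (slow) tensor data later.
type Backend interface {
	Load() error
}

type ggmlBackend struct{ path string }

func (b *ggmlBackend) Load() error {
	fmt.Println("loading tensor data from", b.path) // the slow step, now explicit
	return nil
}

// New only enumerates tensors and allocates buffers.
func New(path string) (Backend, error) {
	fmt.Println("creating backend for", path)
	return &ggmlBackend{path: path}, nil
}

func main() {
	b, err := New("model.gguf")
	if err != nil {
		panic(err)
	}
	// The caller now decides when to pay for loading, e.g. after
	// memory layout decisions have been made.
	if err := b.Load(); err != nil {
		panic(err)
	}
}
```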
  11. 15 May, 2025 1 commit
    • ollamarunner: Separate text and multimodal graphs · 3c14461d
      Jesse Gross authored
      For some multimodal models (such as gemma3), we create a single
      graph that generates the image embedding and then use this in the
      text model. The embedding tensor is completely opaque to the runner.
      
      However, this doesn't work if we need to use the embedding in multiple
      batches. This can arise if the embedding is larger than the batch size.
      In these cases (as with llama4), we would like to create views that
      are more appropriately sized. However, if we do this then the original
      source tensor is used in multiple graphs, which isn't allowed. To
      avoid that problem, models with this pattern compute the embedding
      tensor on first use and recreate the individual views. There is no
      longer a single vision and text graph.
      
      This codifies the pattern of separating vision and text graphs. The
      logic of computing tensors on demand is moved to the runner, so models
      no longer have to worry about this. It also gives the runner visibility
      into the multimodal tensors, which is important for memory management.
      3c14461d
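A rough sketch of the compute-on-first-use pattern the runner now owns (types and sizes are made up): the image embedding is computed once, cached, and batch-sized views are taken from the cached tensor instead of reusing one source tensor across multiple graphs.

```go
package main

import "fmt"

// tensor is a stand-in for an ml.Tensor; here it is just a float slice.
type tensor []float32

type multimodalInput struct {
	embed tensor // computed lazily on first use
}

func (m *multimodalInput) embedding() tensor {
	if m.embed == nil {
		fmt.Println("running vision graph once")
		m.embed = make(tensor, 8) // pretend image embedding
	}
	return m.embed
}

// view returns a batch-sized slice of the cached embedding so each text
// batch gets its own appropriately sized input.
func (m *multimodalInput) view(off, n int) tensor {
	e := m.embedding()
	return e[off : off+n]
}

func main() {
	img := &multimodalInput{}
	for off := 0; off < 8; off += 4 { // embedding larger than the batch size
		batch := img.view(off, 4)
		fmt.Println("text batch uses", len(batch), "embedding rows")
	}
}
```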
  12. 12 May, 2025 1 commit
  13. 08 Apr, 2025 1 commit
    • ollamarunner: Preallocate worst case graph at startup · dbb149e6
      Jesse Gross authored
      Currently, the KV cache and graph are lazily allocated as needed.
      The cache is fully allocated on first use of the corresponding
      layer whereas the graph grows with the size of the context.
      
      This can be an issue if another application allocates more VRAM
      after we do our calculations - Ollama will crash in the middle of
      inference. If we instead allocate the maximum needed memory at
      startup of the runner, we will either succeed or fail at that point
      rather than at some surprising time in the future.
      
      Currently, this only generates a worst case batch for text, which
      means that vision models may get a partial allocation and continue
      to lazily allocate the rest.
      dbb149e6
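Schematically (a toy cost model, not Ollama's actual memory estimation), the startup path becomes: size the worst-case text batch and cache, try to reserve that memory, and fail immediately if the reservation cannot be satisfied.

```go
package main

import (
	"errors"
	"fmt"
)

type runner struct {
	numCtx, batchSize int
	freeVRAM          int64
}

// reserveWorstCase sizes the KV cache and compute graph for the largest
// possible text batch so an allocation failure happens at startup, not
// mid-inference after another application has taken VRAM.
func (r *runner) reserveWorstCase() error {
	need := int64(r.numCtx)*1_000 + int64(r.batchSize)*10_000 // toy cost model
	if need > r.freeVRAM {
		return errors.New("insufficient memory for worst case graph")
	}
	fmt.Println("reserved", need, "bytes up front")
	return nil
}

func main() {
	r := &runner{numCtx: 4096, batchSize: 512, freeVRAM: 16 << 20}
	if err := r.reserveWorstCase(); err != nil {
		panic(err) // fail at startup rather than at a surprising time later
	}
}
```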
  14. 03 Apr, 2025 1 commit
  15. 21 Mar, 2025 1 commit
  16. 20 Mar, 2025 2 commits
    • model: Pass input tensor instead of raw data to models · 0fbfcf3c
      Jesse Gross authored
      Rather than directly giving the input data to models, we can
      pass a tensor instead. In the short term, this saves some duplicated
      code.
      
      Longer term, we will want to overlap setting up the next batch with
      processing of the current one. In this case, we will only have the
      shape of the tensor, but it will not be loaded with data at the time of
      graph generation. By passing only a tensor to models now, we set up
      this possibility and prevent them from relying on data that they won't
      have in the future.
      
      Although the same could be done for Positions and Outputs, in some
      cases we either need the raw input data or don't use them at all.
      Therefore, for now we leave them as they are and allow models to
      convert them to tensors as needed.
      0fbfcf3c
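Illustratively (hypothetical types, not the real input.Batch or ml.Tensor), the change means models see a tensor whose shape is known at graph-build time, while Positions and Outputs stay as raw slices:

```go
package main

import "fmt"

// tensor stands in for ml.Tensor: at graph-build time only the shape may be
// known; the data can be filled in later.
type tensor struct {
	shape []int
	data  []int32 // may be nil while the graph is being constructed
}

type batch struct {
	Inputs    tensor  // now a tensor handed to the model
	Positions []int32 // still raw data; some models need it directly
	Outputs   []int32
}

func forward(b batch) {
	// The model can only rely on the shape here, which is what enables
	// building the next batch's graph before its data has been copied in.
	fmt.Println("building graph for inputs of shape", b.Inputs.shape)
}

func main() {
	forward(batch{Inputs: tensor{shape: []int{512}}, Positions: make([]int32, 512)})
}
```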
    • input: Rename Options to Batch · 0c220935
      Jesse Gross authored
      Options is no longer very descriptive of this struct.
      0c220935
  17. 14 Mar, 2025 1 commit
    • ollamarunner: Use a separate context per multimodal input · 282bfaaa
      Jesse Gross authored
      Currently there is a single context per sequence, shared by all
      multimodal inputs. Since we build a vision encoder graph per
      image, with a large number of inputs we can eventually hit the
      maximum number of graph nodes per context.
      
      This changes to use a separate context for each image, ensuring
      that available resource limits are consistent.
      282bfaaa
  18. 13 Mar, 2025 2 commits
  19. 10 Mar, 2025 1 commit
    • model: Update encoder cache to use multimodal input processing handler · a1cda80b
      Jesse Gross authored
      The encoder cache needs to know the position of images in the input
      stream so that it knows when to delete them. Previously images didn't
      have a position, so we implied one by breaking batches before an
      image and then assuming the image was in the first position. However,
      multimodal objects are now given explicit positions in the input
      stream, so we can use that instead.
      
      Breaking batches was also a way to simulate a cross attention mask
      for mllama. However, given that it only supports a single sequence
      and a single image, this mask doesn't serve any real purpose.
      Removing the batch break does not appear to affect the quality of
      the output.
      
      Most of this is simply moving the input data structures to a new
      package to avoid import cycles.
      a1cda80b
  20. 07 Mar, 2025 1 commit
    • ollamarunner: Improve multimodal input handling · a7e63b82
      Jesse Gross authored
      Various vision models have different requirements for how they
      receive their inputs. For example:
       - Mllama wants images together with text and the image embeddings
         don't themselves have positions or get stored in the main KV cache
       - Llava-style models feed in embeddings similar to tokens and
         images correspond to a varying number of tokens in the cache.
      
      In addition, the strategy for providing inputs must support batching
      and multiple sequences, which are managed by the runner. At the same
      time, we want to keep data handling fully in the model so that new
      architectures are not bottlenecked by runner code which does not
      understand their particular requirements.
      
      This provides a method for models to edit the input stream so that
      it meets their needs while still being in a format that the runner
      understands. This allows the runner to avoid special processing
      for different models.
      
      In addition, this fixes a regression where non-vision models may
      try to incorrectly interpret images.
      a7e63b82
  21. 04 Mar, 2025 1 commit
    • New engine: vision models and auto-fallback (#9113) · 1fdb351c
      Daniel Hiltgen authored
      * Include unified vision layers in memory prediction
      
      For newer vision models with a single gguf, include
      the projection estimates.
      
      * Adjust CLI to handle both styles of vision model metadata
      
      * Wire up new tokenizers for new engine
      
      If we're loading the new engine, utilize the new model
      text processor instead of calling into cgo wrappers for
      llama.cpp. This also cleans up some tech debt from the older
      tokenization flow for the C++ server, which was no longer used.
      
      This also adjusts the grammar handling logic to pass
      through to the new engine instead of utilizing the cgo
      schema to grammar call.
      
      * Lay foundation for auto selection of new engine
      1fdb351c
  22. 27 Feb, 2025 1 commit
    • ml: update Context.Forward interface · 3e8b8a19
      Michael Yang authored
      update Context.Forward to accept multiple tensors to match
      Context.Compute signature
      
      update Context.Forward to return Context such that it can be chained
      with Context.Compute
      3e8b8a19
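After this change the interface reads roughly like the following simplified sketch (the real ml.Context and ml.Tensor have many more methods): Forward is variadic to mirror Compute and returns the Context so the two can be chained.

```go
package main

import "fmt"

type Tensor interface{ Name() string }

// Context is a simplified version of the interface described above:
// Forward accepts multiple tensors like Compute, and returns the Context
// so calls can be chained, e.g. ctx.Forward(a, b).Compute(a, b).
type Context interface {
	Forward(...Tensor) Context
	Compute(...Tensor)
}

type ctx struct{}

func (c *ctx) Forward(ts ...Tensor) Context {
	fmt.Println("added", len(ts), "tensors to the graph")
	return c
}

func (c *ctx) Compute(ts ...Tensor) {
	fmt.Println("computed", len(ts), "tensors")
}

type t struct{ name string }

func (x t) Name() string { return x.name }

func main() {
	var c Context = &ctx{}
	a, b := t{"a"}, t{"b"}
	c.Forward(a, b).Compute(a, b)
}
```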
  23. 20 Feb, 2025 1 commit
    • ollamarunner: Pass runner performance parameters to backends · bd6a7d5e
      Jesse Gross authored
      Currently the following parameters are in the runner but not used:
       - numGPULayers
       - mainGPU
       - threads
       - tensorSplit
      
      This passes them through to the backend, which is where they would
      actually get used. However, the GGML backend does not yet do anything
      with them.
      bd6a7d5e
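The plumbing amounts to passing a small parameter struct through to backend construction; a schematic version follows (field names follow the list above, the struct itself is illustrative).

```go
package main

import "fmt"

// backendParams carries runner flags through to the backend; the commit
// notes the GGML backend does not yet act on them.
type backendParams struct {
	NumGPULayers int
	MainGPU      int
	Threads      int
	TensorSplit  []float32
}

func newBackend(p backendParams) {
	fmt.Printf("backend created with %d GPU layers on GPU %d using %d threads\n",
		p.NumGPULayers, p.MainGPU, p.Threads)
}

func main() {
	newBackend(backendParams{NumGPULayers: 33, MainGPU: 0, Threads: 8, TensorSplit: []float32{1}})
}
```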
  24. 15 Feb, 2025 1 commit
  25. 14 Feb, 2025 4 commits
    • Runner for Ollama engine · ed443a03
      Jesse Gross authored
      This provides integration with the new Ollama engine
      (58245413 next ollama runner (#7913)) and the rest of the Ollama
      infrastructure such as the runner and Ollama server.
      
      In addition, it also builds out the KV cache infrastructure to
      support the requirements of how Ollama runs models, such as:
       - Parallel processing
       - Memory management for defragmentation and shifting
       - Multi-modal models
      
      Both old and new engines continue to be supported. By default, only
      the old engine is used. To enable the new engine:
      
      Start the server with the OLLAMA_NEW_ENGINE environment variable set:
      OLLAMA_NEW_ENGINE=1 ./ollama serve
      
      Start a model that is supported by the Ollama engine. This one is Llama 3.1 8b Q4_K_M:
      ./ollama run jessegross/llama3.1
      ed443a03
    • model: Load tensors behind an interface · d650ad39
      Jesse Gross authored
      Currently, if a model uses an interface for its data structures (as mllama
      does) then the tensor data in the structs implementing that interface will
      not get loaded.
      d650ad39
    • backend: Support graph computation that does not return an output · 4d4463b2
      Jesse Gross authored
      There are two cases where we may not have an output after computing:
       - Prompt processing where the length of the input exceeds the batch
         size
       - Internal memory management operations such as cache defrag and shift
      4d4463b2
    • next ollama runner (#7913) · 58245413
      Michael Yang authored
      
      
      feat: add new Ollama engine using ggml through cgo
      
      This change introduces a new way to run pretrained models. It introduces three high-level interfaces and a number of smaller helper interfaces to facilitate this.
      
      - `model.Model` defines the interface for a model architecture. Models such as `llama` and `mllama`, which are provided as examples, can implement the model's forward propagation in the `Forward` method. This method will be called to generate completions. This interface can be found in `model/model.go`
      - `ml.Backend` defines the interface for a backend tensor library, in this case `ggml`. Among other things, a Backend is responsible for loading a pretrained model into hardware (GPU, CPU, etc) and providing an interface for Models to access loaded tensors. This interface can be found in `ml/backend.go`
      - `ml.Tensor` defines the interface for a tensor and tensor operations
      
      This is the first implementation of the new engine. Follow up PRs will implement more features:
      
      - non-greedy sampling (#8410)
      - integration with Ollama and KV caching (#8301)
      - more model support (#9080) with more coming soon
      Co-authored-by: Bruce MacDonald <brucewmacdonald@gmail.com>
      58245413
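A condensed sketch of the three interfaces this commit introduces (the signatures below are illustrative stand-ins, not the exact definitions in model/model.go and ml/backend.go):

```go
package main

import "fmt"

// Tensor is the tensor abstraction (ml.Tensor in the real code).
type Tensor interface {
	Shape() []int
}

// Backend loads a pretrained model onto hardware and exposes its tensors
// (ml.Backend in the real code).
type Backend interface {
	Get(name string) Tensor
}

// Model is one architecture's forward pass (model.Model in the real code).
type Model interface {
	Forward(inputs Tensor) (Tensor, error)
}

// Toy implementations so the sketch runs.
type vec []int

func (v vec) Shape() []int { return v }

type toyBackend struct{}

func (toyBackend) Get(name string) Tensor { return vec{4096} }

type llama struct{ b Backend }

func (m llama) Forward(inputs Tensor) (Tensor, error) {
	w := m.b.Get("token_embd.weight")
	fmt.Println("embedding inputs of shape", inputs.Shape(), "with weight of shape", w.Shape())
	return inputs, nil
}

func main() {
	m := llama{b: toyBackend{}}
	if _, err := m.Forward(vec{8}); err != nil {
		panic(err)
	}
}
```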