1. 14 Aug, 2025 3 commits
• update vendored llama.cpp and ggml (#11823) · 1a19df1f
      Michael Yang authored
      * TEMPORARY: Update the llama.cpp upstream to my fork's Granite Four branch
      
      This will be redone once my branch is merged upstream in llama.cpp
      
      * feat: Update all patches
      
      There are a number that are no longer needed at all:
      
      - 0003-embeddings: Embeddings entirely overhauled on master
      - 0008-ensure-KV-cache-is-fully-defragmented: KV caching entirely
          overhauled on master
      - 0019-metal-add-mean-kernel-14267: Merged upstream
      - 0020-CUDA-add-mean-operation-14313: Merged upstream
      
      * feat: Sync llama.cpp and ggml
      
      * fix: Update rsync-filter for all moved/new/removed files
      
      * fix: Add files missing from sync
      
      * fix: Update ggml rsync-filter for new ggml-cpu/arch subdirs
      
      * fix: Add ggml files missing from sync
      
      * fix: Narrow llama.cpp rsync-filter to not include mtmd main tool cpp files
      
      * fix: Remove mtmd main cpp files
      
      * fix: Add missing include in sampling_ext.cpp
      
      * fix: Update llama.go to use mtmd instead of clip/llava
      
      * fix: Add patch for mtmd_input_text
      
      * chore: Ignore *.patched in the patch directory
      
      * fix: Fix support for arch-specific ggml-cpu source files with new arrangement
      
      In https://github.com/ggml-org/llama.cpp/pull/13892, all arch-specific
      implementations were split out into a nested tree structure under
ggml-cpu/arch. This conflicts with the standard CGO layout, where all
arch-specific source files are expected to live in the same directory as
the parent Go module and use suffixes based on GOOS and GOARCH. As such,
there were really two options for getting this to work:
      
1. Add a patch on top of the GGML sync to rearrange the files to match the
Go layout convention
      2. Use CGO directives to conditionally include the nested source files in
      the compilation units
      
      This commit does (2) in order to minimize the set of changes needed on top
      of the upstream file layout. To get this to work, there are two key things
      needed:
      
1. In cpu.go, #cgo directives explicitly define __${GOARCH}__ for the
preprocessor
2. In arch-impls.c|cpp, an #ifdef | #elif defined | #endif chain explicitly
includes the .c|.cpp files for the given architecture from the nested
directory (a minimal sketch of this arrangement appears at the end of this
change list)
      
      * fix: Use mtmd_helper to correctly load the bitmap for the image
      
      * fix: Apply patch for mtmd_text_input
      
      * fix: Add missing stb to llama.cpp rsync-filter
      
      * fix: Add sync'ed stb vendored header
      
      * fix: Use c++17 and include vendor for go wrapper modules
      
      * fix: Update patch 0015 for upstream implementation of uuid
      
      * feat: Bump to the latest tip of the branch
      
      * fix: Update patches for bump
      
* feat: Bump back to the central repo and point at the latest master
      
      This includes granite 4 and a number of other model architectures!
      
      * fix: Revert changes to ggml export GPU UUID patch
      
      * fix: Add patch for GGML_VERSION and GGML_COMMIT constants
      
      * feat: Sync all patched code
      
      * build: Include cmake/common.cmake in ggml sync
      
* build: Add top-level include for GNUInstallDirs in CMakeLists.txt
      
      This is used to populate CMAKE_INSTALL_BINDIR
      
      * fix: Add a patch to avoid power throttling API on non-msvc windows builds
      
      * fix: Sync patch changes for ggml-cpu.c
      
      * feat: Bump llama.cpp to 4a4f42
      
      This picks up support for Kimi K2 and PLaMO-2
      
      * feat: Sync llama.cpp
      
      * fix: Handle multi-chunk image encodings from mtmd
      
      * fix: Re-number patches after merge with `main`
      
      * feat: Bump to 41e78c in the makefile
      
      * fix: Fix Solar and argsort/copy patches after bump
      
      * fix: Remove Gemma3n CUDA Graphs patch
      
      It was implemented upstream:
      https://github.com/ggml-org/llama.cpp/pull/14741
      
      * feat: Sync llama.cpp / ggml after latest bump
      
      * build: Remove unnecessary CFLAGS definitions in cpu.go
      
      * fix: Remove unnecessary additions in the rsync-filter
      
      * fix: Remove unused vendored code for chat template parsing
      
      * Revert "fix: Remove Gemma3n CUDA Graphs patch"
      
      This reverts commit d724caced3ce21f08924d4b7801f94ce6638f6ea.
      
      * fix: Update 0020 CUDA Graphs for gemma3n to keep both llama.cpp and ollama fixes
      
      https://github.com/ollama/ollama/pull/11195#issuecomment-3137312394
      
      
      
      * fix: Sync ggml-cuda.cu after keeping both style cuda graph fixes for gemma3n
      
      * unwind mxfp4 patch
      
      Prepare to bump ggml with their impl for mxfp4
      
      * bump
      
      * fix windows build error
      
      * Convert tensors at load time
      
Repack the mxfp4 tensors the way ggml's kernels expect them to be.
      
      * convert mlp bf16 to f32
      
      * buffer the conversion better
      
      * reshape earlier
      
      * openai swiglu
      
      * add ids
      
      * split qkv, gate_up
      
      * fix nested alt tags
      
      * fast attention
      
      * remove debug messages
      
      * fix lint
      
      * remove redundant test
      
      * remap values only if source/target are different
      
      * add back i32->i32 copy
      
      * refactor cpu quants
      
      * clean up vendor
      
      * update patch instructions
      
      * clean up patches
      
      * remove webgpu
      
      * update mem
      
      * also handle gpt-oss
      
      * revert convert changes
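
As a minimal sketch of option (2) above, with illustrative package, path,
and file names (the real cpu.go and arch-impls.c|cpp differ in scope and
detail):

    package cpu

    /*
    #cgo amd64 CFLAGS: -D__amd64__
    #cgo arm64 CFLAGS: -D__arm64__

    // arch-impls.c-style amalgamation: pull in the nested per-arch sources
    // based on the define injected above (paths illustrative)
    #if defined(__amd64__)
    #include "ggml-cpu/arch/x86/quants.c"
    #elif defined(__arm64__)
    #include "ggml-cpu/arch/arm/quants.c"
    #endif
    */
    import "C"

In the real tree the #include chain lives in separate arch-impls.c|cpp
files that CGO compiles alongside the package; it is shown inline here only
to keep the sketch self-contained.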
      
      ---------
Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
Co-authored-by: Gabe Goodhart <ghart@us.ibm.com>
Co-authored-by: Daniel Hiltgen <daniel@ollama.com>
• doc: clarify both rocm and main bundle necessary (#11900) · 7ccfd97a
      Daniel Hiltgen authored
Some users expect the rocm bundles to be self-sufficient, but they are designed to be additive: the main bundle is still required.
• test: add valid responses (#11902) · c385ca86
      Daniel Hiltgen authored
Some of the new models need a few more valid responses to pass.
  2. 13 Aug, 2025 4 commits
  3. 12 Aug, 2025 2 commits
  4. 11 Aug, 2025 4 commits
  5. 10 Aug, 2025 1 commit
  6. 08 Aug, 2025 3 commits
• ggml: No-alloc mode · 79f6376f
      Jesse Gross authored
      Callers can set a backend buffer type to be no-alloc, meaning that
      it does not allocate memory for tensors or operations. This can
      be used for calculating memory requirements. Tensors and graphs
      must be recreated with no-alloc set to false before loading data.
      
      Defaults to false for newly created backend buffer types.
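
A rough sketch of the intended flow, with hypothetical wrapper names
(NewBufferType, SetNoAlloc, and BufferSize are illustrative, not the
actual symbols):

    bt := backend.NewBufferType(device) // no-alloc defaults to false
    bt.SetNoAlloc(true)                 // measure only: nothing is allocated
    g := model.BuildGraph(bt)
    required := g.BufferSize()          // memory requirement for this graph

    bt.SetNoAlloc(false)                // recreate tensors for real...
    g = model.BuildGraph(bt)            // ...before loading any data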
• ggml: Support closing backends · 756c78cf
      Jesse Gross authored
      In order to iteratively find the best memory allocation, we need to
      be able to free backend memory so we can try again.
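
The retry loop this enables might look roughly like the following
(hypothetical names, not the actual scheduler code):

    for _, layout := range candidateLayouts {
        b := backend.New(layout)
        if err := b.Alloc(graph); err == nil {
            break // found an allocation that fits
        }
        b.Close() // free backend memory so the next attempt starts clean
    }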
• ggml: Use GGML's typedef'ed pointer types · d7f4f788
      Jesse Gross authored
For many backend data structures, GGML defines a typedef of a pointer
type and returns these from functions. In most cases, CGo understands
that these are interchangeable, but some parts of Go (such as generics)
treat them as two different types. We should prefer the form that
GGML uses.
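
A pure-Go illustration of the mismatch, using stand-in types rather than
the generated CGo bindings:

    package main

    type backendT *int // stand-in for a typedef'ed C pointer type

    func wantsRaw[T interface{ *int }](t T)    {}
    func wantsTilde[T interface{ ~*int }](t T) {}

    func main() {
        var b backendT
        // wantsRaw(b) // compile error: backendT does not satisfy *int
        wantsTilde(b)  // ok: ~*int admits any type whose underlying type is *int
    }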
  7. 07 Aug, 2025 6 commits
  8. 06 Aug, 2025 7 commits
  9. 05 Aug, 2025 6 commits
• tools: support anyOf types · 30f8a68c
      Devon Rifkin authored
afaik gpt-oss is the first model that meaningfully transforms tool
function definitions in its template. We found that relatively common
definitions that include `anyOf` were not working because the template
assumed that types were always defined via a `type` field.
      
anyOf allows for fully recursive types, so I exposed a
`toTypeScriptType()` function to handle this recursive logic in Go and
keep the templates cleaner. The gpt-oss templates will need to be
updated to use this.
      
We should keep building out our function definition support to more
fully support the parts of JSON Schema that make sense for this use
case, but in the meantime this will unblock some users (e.g., Zed's
ollama integration w/ gpt-oss). Probably the most urgent is proper array
support.
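
A sketch of the recursive mapping (the real toTypeScriptType lives in the
template handling and differs in coverage and detail):

    import "strings"

    // Sketch only: map a JSON-schema fragment to a TypeScript type string,
    // recursing through anyOf branches.
    func toTypeScriptType(schema map[string]any) string {
        if anyOf, ok := schema["anyOf"].([]any); ok {
            parts := make([]string, 0, len(anyOf))
            for _, sub := range anyOf {
                if m, ok := sub.(map[string]any); ok {
                    parts = append(parts, toTypeScriptType(m))
                }
            }
            return strings.Join(parts, " | ")
        }
        switch schema["type"] {
        case "string":
            return "string"
        case "number", "integer":
            return "number"
        case "boolean":
            return "boolean"
        case "null":
            return "null"
        default:
            return "any" // arrays/objects: future work, per the note above
        }
    }

With this, {"anyOf": [{"type": "string"}, {"type": "null"}]} renders as
string | null in the template output.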
• win: static link msvc libs (#11612) · e378e334
      Daniel Hiltgen authored
      This should help reduce the runtime dependencies on windows.
• gptoss: fix memory calc (#11700) · fcec04bf
      Michael Yang authored
• docs: add docs for Ollama Turbo (#11687) · ee92ca3e
      Jeffrey Morgan authored
• ggml: Prevent kv cache quantization on gpt-oss · 8253ad4d
      Jesse Gross authored
      KV cache quantization has a dependency on the flash attention kernel.
      We currently cannot use flash attention with gpt-oss as it requires
      additional operations.
      
The model definition does not call flash attention, so it works
regardless of the setting, but the cache will still pick up the
quantization type. This updates the flash attention setting earlier
in the loading flow so that all downstream settings are also set correctly.
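
Illustrative ordering only (hypothetical names): decide flash attention
support first, then derive the cache type from it.

    flashAttn := flashAttnRequested && modelSupportsFlashAttn // false for gpt-oss
    kvCacheType := "f16"
    if flashAttn {
        kvCacheType = requestedKVCacheType // quantized caches require flash attention
    }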
      
      Fixes: #11671
• gpt-oss (#11672) · fa7776fd
      Michael Yang authored
      
      
      * bf16
      
      * tests
      
      * gpt-oss
      
      * enable gptoss for engine
      
      * rough estimate
      
      * convert to mxfp4
      
      * handle safetensors U8
      
      * clamp glu/linear
      
      * update tokenizer
      
* MXFP4 support

This implements the Open Compute Microscaling (MX) FP4 format
as a tensor type, with backend implementations focusing
on mulmat and mulmatid on CPU, CUDA, and Metal (a dequantization
sketch appears at the end of this change list).
      
      * Unit tests for MXFP4 support
      
      This exercises various operations and shapes on both CPU and GPU (if detected
      on the system)
      
      * cuda graph
      
      * unit test adjustments
      
      * cuda: optimize memory access
      
      Read 4 bytes at a time (8 elements) when performing mul_mat_vec_mxfp4
      
      * mac: fix crash on old macos versions
      
cblas_sgemm is only supported on v13.3 and up, but bf16 is
only supported on v14+, so we were falling back to ggml-blas and
crashing on bf16 tensors. Checking whether the function is null
seems to be the simplest way to conditionally avoid registering the
backend.
      
      * server: Minimum context length for gptoss
      
      This model requires a minimum context length of 8192 to function
      effectively. Users can set higher values through all normal mechanisms
      but lower values will be silently reset.
      
* ggml: Multiply by numParallel for gptoss sliding window

When computing the graph size estimate, the context size is already
multiplied by numParallel, so estimates reflect that. However, since
sliding window models use a smaller, fixed context size, they need
to take numParallel into account manually (see the arithmetic sketch
at the end of this change list).
      
      * gpt-oss integration
      
      includes harmony parser and thinking levels, etc.
      
      * fix sync
      
      * fix tests
      
      * fix lint
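
A hedged sketch of MXFP4 dequantization per the OCP MX spec: blocks of 32
FP4 (E2M1) elements share one E8M0 power-of-two scale byte. The exact
block layout and nibble order in ggml may differ; this is illustrative
only.

    import "math"

    // e2m1 lists the 16 FP4 values; the high bit of each nibble is the sign.
    var e2m1 = [16]float32{0, 0.5, 1, 1.5, 2, 3, 4, 6, 0, -0.5, -1, -1.5, -2, -3, -4, -6}

    func dequantBlock(scale byte, qs *[16]byte, out *[32]float32) {
        s := float32(math.Exp2(float64(int(scale) - 127))) // E8M0 scale: 2^(e-127)
        for i, b := range qs {
            out[2*i] = e2m1[b&0x0F] * s // low nibble
            out[2*i+1] = e2m1[b>>4] * s // high nibble
        }
    }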
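
And the sliding-window estimate arithmetic, likewise with hypothetical
names:

    cacheTokens := numCtx // already includes the numParallel factor
    if slidingWindow > 0 {
        cacheTokens = min(slidingWindow*numParallel, numCtx)
    }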
      
      ---------
Co-authored-by: Daniel Hiltgen <daniel@ollama.com>
Co-authored-by: Jesse Gross <jesse@ollama.com>
Co-authored-by: Devon Rifkin <drifkin@drifkin.net>
  10. 04 Aug, 2025 1 commit
• kvcache: Log contents of cache when unable to find a slot · 0d38b665
      Jesse Gross authored
      There is a bug when using sliding window attention where we run
      out of KV cache slots. This is likely due to not correctly removing
      all of the entries as they slide out of range. This adds additional
      logging when this occurs to track down the source.
      
      Bug #10127
  11. 31 Jul, 2025 1 commit
• kvcache: Enable SWA to retain additional entries · 4183bb05
      Jesse Gross authored
      Models that use sliding window attention can only resume a sequence
      from the cache if it falls within the saved windows. This works well
      if the next message picks up where the old one left off. However, it
      generally prevents a partial prefix match unless the entire conversation
      falls within the sliding window.
      
      This can be a problem with reasoning models where the traces are
      supposed to be removed from future messages, forcing the entire
      history to be re-evaluated.
      
      This change allows models to specify that a larger amount of the
      history be retained in memory, to allow more partial resumption.
      It still respects the window that the model was trained on for
      token generation.
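
Illustratively (not the actual kvcache API), the change separates the
eviction horizon from the attention window:

    // retain >= window keeps extra history cached for prefix resumption,
    // while generation still masks attention beyond the trained window.
    func canEvict(pos, latest, retain int32) bool { return pos < latest-retain }
    func isMasked(pos, cur, window int32) bool    { return pos < cur-window }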
  12. 30 Jul, 2025 2 commits