"packaging/vscode:/vscode.git/clone" did not exist on "b9a1984cc98fd72b93ab340e4bdd9c33bdbe326b"
  1. 29 Aug, 2025 1 commit
    • perf: build graph for next batch async to keep GPU busy (#11863) · 517807cd
      Daniel Hiltgen authored
      * perf: build graph for next batch in parallel to keep GPU busy
      
      This refactors the main run loop of the Ollama runner to perform the GPU-intensive
      tasks (Compute+Floats) in a goroutine so that the next batch can be prepared in
      parallel, reducing the amount of time the GPU stalls waiting for its next batch
      of work.
      
      * tests: tune integration tests for ollama engine
      
      This tunes the integration tests to focus more on models supported
      by the new engine.
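
      A minimal Go sketch of the overlap described above: the GPU-heavy work for the
      current batch runs in a goroutine while the next batch is assembled on the CPU.
      The names (prepareBatch, computeBatch) are illustrative stand-ins, not the
      runner's actual API.

        package main

        import "fmt"

        type batch struct{ id int }

        // prepareBatch stands in for the CPU-side work of assembling a batch.
        func prepareBatch(id int) batch { return batch{id: id} }

        // computeBatch stands in for the GPU-intensive Compute+Floats step.
        func computeBatch(b batch) { fmt.Println("computed batch", b.id) }

        func main() {
            const numBatches = 4
            next := prepareBatch(0)
            for i := 0; i < numBatches; i++ {
                cur := next
                done := make(chan struct{})
                // Run the GPU-intensive work asynchronously...
                go func() {
                    computeBatch(cur)
                    close(done)
                }()
                // ...while the next batch is prepared in parallel on the CPU.
                if i+1 < numBatches {
                    next = prepareBatch(i + 1)
                }
                <-done // wait so batches are still submitted in order
            }
        }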
  2. 27 Aug, 2025 1 commit
    • ggml: Avoid allocating CUDA primary context on unused GPUs · 9d97e6a9
      Jesse Gross authored
      The recent memory management changes caused all GPUs to be visible
      to the runner, regardless of whether they are ultimately used. This
      caused CUDA devices to allocate a primary context (~300 MB VRAM) on
      each GPU, for each model. This is unnecessary, so we can both avoid
      touching GPUs that we exclude in the early stage of allocation and
      free the memory for any that we touch but don't use.
      
      The issue will continue to exist for the old engine, since it touches
      all devices during initialization.
  3. 26 Aug, 2025 1 commit
  4. 19 Aug, 2025 1 commit
    • kvcache: Use Cast instead of Copy for flash attention masks · 05ccb17c
      Jesse Gross authored
      Flash attention kernels require the mask of the KV cache to be an F16
      rather than an F32. We can use the GGML operation ggml_cast to do
      this rather than doing it ourselves, which allows reuse of a
      preallocated buffer in the graph rather than allocating a new one
      for each batch. This improves token generation performance with
      flash attention by 10-30% (with gpt-oss). This also makes performance
      with flash attention better than without it, as expected.
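
      A hedged sketch of the idea: add a cast node to the graph instead of copying the
      mask into a freshly allocated F16 tensor each batch, so the graph's preallocated
      buffer is reused. The Context/Tensor interface below is illustrative; only the
      underlying ggml_cast operation is named by the commit.

        package main

        // Tensor and Context are illustrative stand-ins for a graph-building API.
        type Tensor interface{}

        type DType int

        const (
            F32 DType = iota
            F16
        )

        type Context interface {
            // Cast adds a cast node (wrapping something like GGML's ggml_cast) whose
            // output buffer is preallocated along with the rest of the graph.
            Cast(t Tensor, to DType) Tensor
            // NewTensor allocates a fresh tensor; doing this per batch is what the
            // commit avoids.
            NewTensor(dtype DType, shape ...int) Tensor
        }

        // maskForFlashAttention returns the KV-cache mask in the F16 form that
        // flash attention kernels expect.
        func maskForFlashAttention(ctx Context, maskF32 Tensor) Tensor {
            // Before: out := ctx.NewTensor(F16, ...) followed by a copy, every batch.
            // After: a single cast node that reuses the graph's preallocated buffer.
            return ctx.Cast(maskF32, F16)
        }

        func main() { _ = maskForFlashAttention }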
  5. 14 Aug, 2025 3 commits
    • 6eaf194b
      Daniel Hiltgen authored
    • llm: New memory management · d5a0d8d9
      Jesse Gross authored
      This changes the memory allocation strategy from upfront estimation to
      tracking actual allocations done by the engine and reacting to that. The
      goal is to avoid issues caused by both under-estimation (crashing) and
      over-estimation (low performance due to under-utilized GPUs).
      
      It is currently opt-in and can be enabled for models running on the
      Ollama engine by setting OLLAMA_NEW_ESTIMATES=1. Behavior in other
      cases is unchanged and will continue to use the existing estimates.
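
      In practice the opt-in is just the environment variable named above, e.g.
      OLLAMA_NEW_ESTIMATES=1 when starting the server. A hedged sketch of the
      allocate-and-react idea follows; tryLoad, errNoMem, and the one-layer-at-a-time
      backoff are hypothetical simplifications, not the engine's actual logic.

        package main

        import (
            "errors"
            "fmt"
        )

        // errNoMem stands in for the backend reporting a failed device allocation.
        var errNoMem = errors.New("backend: out of device memory")

        // tryLoad is a hypothetical loader: it attempts to place gpuLayers layers on
        // the GPU and reports whether the real allocations fit.
        func tryLoad(gpuLayers int) error {
            const fits = 28 // pretend only 28 layers fit on this device
            if gpuLayers > fits {
                return errNoMem
            }
            return nil
        }

        // loadWithBackoff reacts to actual allocation results instead of trusting an
        // upfront estimate: start optimistic, then shed layers until loading succeeds.
        func loadWithBackoff(totalLayers int) (int, error) {
            for layers := totalLayers; layers >= 0; layers-- {
                err := tryLoad(layers)
                if err == nil {
                    return layers, nil
                }
                if !errors.Is(err, errNoMem) {
                    return 0, err // a real failure, not just memory pressure
                }
            }
            return 0, fmt.Errorf("model does not fit even with 0 GPU layers")
        }

        func main() {
            layers, err := loadWithBackoff(33)
            if err != nil {
                panic(err)
            }
            fmt.Println("loaded with", layers, "GPU layers")
        }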
    • update vendored llama.cpp and ggml (#11823) · 1a19df1f
      Michael Yang authored
      * TEMPORARY: Update the llama.cpp upstream to my fork's Granite Four branch
      
      This will be redone once my branch is merged upstream in llama.cpp
      
      * feat: Update all patches
      
      There are a number that are no longer needed at all:
      
      - 0003-embeddings: Embeddings entirely overhauled on master
      - 0008-ensure-KV-cache-is-fully-defragmented: KV caching entirely
          overhauled on master
      - 0019-metal-add-mean-kernel-14267: Merged upstream
      - 0020-CUDA-add-mean-operation-14313: Merged upstream
      
      * feat: Sync llama.cpp and ggml
      
      * fix: Update rsync-filter for all moved/new/removed files
      
      * fix: Add files missing from sync
      
      * fix: Update ggml rsync-filter for new ggml-cpu/arch subdirs
      
      * fix: Add ggml files missing from sync
      
      * fix: Narrow llama.cpp rsync-filter to not include mtmd main tool cpp files
      
      * fix: Remove mtmd main cpp files
      
      * fix: Add missing include in sampling_ext.cpp
      
      * fix: Update llama.go to use mtmd instead of clip/llava
      
      * fix: Add patch for mtmd_input_text
      
      * chore: Ignore *.patched in the patch directory
      
      * fix: Fix support for arch-specific ggml-cpu source files with new arrangement
      
      In https://github.com/ggml-org/llama.cpp/pull/13892, all arch-specific
      implementations were split out into a nested tree structure under
      ggml-cpu/arch. This conflicts with standard CGO layout where all
      arch-specific source files are expected to live in the same directory as
      the parent Go module and use suffixes based on GOOS and GOARCH. As such,
      there were really two options for getting this to work:
      
      1. Add a patch on top of the GGML sync to rearrange the files to match the
      Go layout convention
      2. Use CGO directives to conditionally include the nested source files in
      the compilation units
      
      This commit does (2) in order to minimize the set of changes needed on top
      of the upstream file layout. To get this to work, there are two key things
      needed:
      
      1. In cpu.go, #cgo directives are added to explicitly set __${GOARCH}__ in
      the preprocessor directives
      2. In arch-impls.c|cpp, use an #ifdef | #elif defined | #endif chain to
      explicitly include the .c|.cpp files for the given architecture from the
      nested directory (a minimal sketch of this arrangement follows this commit entry)
      
      * fix: Use mtmd_helper to correctly load the bitmap for the image
      
      * fix: Apply patch for mtmd_text_input
      
      * fix: Add missing stb to llama.cpp rsync-filter
      
      * fix: Add sync'ed stb vendored header
      
      * fix: Use c++17 and include vendor for go wrapper modules
      
      * fix: Update patch 0015 for upstream implementation of uuid
      
      * feat: Bump to the latest tip of the branch
      
      * fix: Update patches for bump
      
      * feat: Bump back to the central repo and point at the latest master
      
      This includes granite 4 and a number of other model architectures!
      
      * fix: Revert changes to ggml export GPU UUID patch
      
      * fix: Add patch for GGML_VERSION and GGML_COMMIT constants
      
      * feat: Sync all patched code
      
      * build: Include cmake/common.cmake in ggml sync
      
      * build: Add top-level include for GNUInstallDirs in CMakeLists.txt
      
      This is used to populate CMAKE_INSTALL_BINDIR
      
      * fix: Add a patch to avoid power throttling API on non-msvc windows builds
      
      * fix: Sync patch changes for ggml-cpu.c
      
      * feat: Bump llama.cpp to 4a4f42
      
      This picks up support for Kimi K2 and PLaMO-2
      
      * feat: Sync llama.cpp
      
      * fix: Handle multi-chunk image encodings from mtmd
      
      * fix: Re-number patches after merge with `main`
      
      * feat: Bump to 41e78c in the makefile
      
      * fix: Fix Solar and argsort/copy patches after bump
      
      * fix: Remove Gemma3n CUDA Graphs patch
      
      It was implemented upstream:
      https://github.com/ggml-org/llama.cpp/pull/14741
      
      * feat: Sync llama.cpp / ggml after latest bump
      
      * build: Remove unnecessary CFLAGS definitions in cpu.go
      
      * fix: Remove unnecessary additions in the rsync-filter
      
      * fix: Remove unused vendored code for chat template parsing
      
      * Revert "fix: Remove Gemma3n CUDA Graphs patch"
      
      This reverts commit d724caced3ce21f08924d4b7801f94ce6638f6ea.
      
      * fix: Update 0020 CUDA Graphs for gemma3n to keep both llama.cpp and ollama fixes
      
      https://github.com/ollama/ollama/pull/11195#issuecomment-3137312394
      
      
      
      * fix: Sync ggml-cuda.cu after keeping both style cuda graph fixes for gemma3n
      
      * unwind mxfp4 patch
      
      Prepare to bump ggml with their impl for mxfp4
      
      * bump
      
      * fix windows build error
      
      * Convert tensors at load time
      
      Repack the mxfp4 tensors as GGML's kernels expect them to be.
      
      * convert mlp bf16 to f32
      
      * buffer the conversion better
      
      * reshape earlier
      
      * openai swiglu
      
      * add ids
      
      * split qkv, gate_up
      
      * fix nested alt tags
      
      * fast attention
      
      * remove debug messages
      
      * fix lint
      
      * remove redundant test
      
      * remap values only if source/target are different
      
      * add back i32->i32 copy
      
      * refactor cpu quants
      
      * clean up vendor
      
      * update patch instructions
      
      * clean up patches
      
      * remove webgpu
      
      * update mem
      
      * also handle gpt-oss
      
      * revert convert changes
      
      ---------
      Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
      Co-authored-by: Gabe Goodhart <ghart@us.ibm.com>
      Co-authored-by: Daniel Hiltgen <daniel@ollama.com>
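
      A minimal sketch of the CGO arrangement described under "Fix support for
      arch-specific ggml-cpu source files" above. File and include paths here are
      illustrative, and the umbrella C file is inlined as the cgo preamble so the
      whole sketch fits in one Go file.

        // cpu.go (sketch): make the C preprocessor aware of GOARCH so the vendored,
        // arch-specific sources under ggml-cpu/arch can be pulled in without
        // rearranging them to match the usual GOOS/GOARCH file-suffix layout.
        package cpu

        /*
        #cgo amd64 CFLAGS: -D__amd64__
        #cgo arm64 CFLAGS: -D__arm64__

        // In the real tree this chain lives in a small umbrella file (arch-impls.c);
        // the includes are commented out here because the paths are illustrative.
        #if defined(__amd64__)
        // #include "ggml-cpu/arch/x86/quants.c"
        #elif defined(__arm64__)
        // #include "ggml-cpu/arch/arm/quants.c"
        #endif
        */
        import "C"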
  6. 13 Aug, 2025 1 commit
  7. 12 Aug, 2025 1 commit
    • ggml: Use ordinal IDs for AMD GPUs on Linux when UUID is unavailable · a343ae53
      Jesse Gross authored
      Some AMD GPUs do not provide UUIDs and report only "XX". In these
      cases, we should use the ordinal ID as an alternate identifier.
      This is the same as what we always need to do on Windows for AMD.
      
      In addition, this prints out the ID for each GPU when enumerating
      them for easier debugging in the future.
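
      A small sketch of the fallback above; deviceID is a hypothetical helper, and
      "XX" is the placeholder the commit says these AMD GPUs report instead of a UUID.

        package main

        import "fmt"

        // deviceID picks a stable identifier for a GPU: prefer the UUID, but fall
        // back to the ordinal when the driver reports nothing useful.
        func deviceID(uuid string, ordinal int) string {
            if uuid == "" || uuid == "XX" {
                return fmt.Sprintf("%d", ordinal)
            }
            return uuid
        }

        func main() {
            // Printing the chosen ID while enumerating devices helps later debugging.
            for i, uuid := range []string{"GPU-1234-abcd", "XX"} {
                fmt.Printf("device %d -> id %s\n", i, deviceID(uuid, i))
            }
        }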
  8. 08 Aug, 2025 3 commits
    • ggml: No-alloc mode · 79f6376f
      Jesse Gross authored
      Callers can set a backend buffer type to be no-alloc, meaning that
      it does not allocate memory for tensors or operations. This can
      be used for calculating memory requirements. Tensors and graphs
      must be recreated with no-alloc set to false before loading data.
      
      Defaults to false for newly created backend buffer types.
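
      A hedged sketch of how a no-alloc pass can be used for sizing; the Backend type
      and its fields are stand-ins for the GGML wrappers, not their real API.

        package main

        import "fmt"

        // Backend is an illustrative stand-in for a backend buffer type wrapper.
        type Backend struct {
            noAlloc  bool // when true, tensors and graphs are laid out but not allocated
            required int  // bytes the layout would need
        }

        func newBackend(noAlloc bool) *Backend { return &Backend{noAlloc: noAlloc} }

        // buildGraph pretends to lay out tensors and a compute graph, accumulating sizes.
        func (b *Backend) buildGraph() {
            b.required = 512 << 20 // pretend the layout needs 512 MiB
            if b.noAlloc {
                return // measuring only: nothing is actually allocated
            }
            // ...real allocations would happen here...
        }

        func main() {
            // Pass 1: no-alloc, just to learn the memory requirements.
            measure := newBackend(true)
            measure.buildGraph()
            fmt.Printf("graph would need %d MiB\n", measure.required>>20)

            // Pass 2: recreate with no-alloc set to false before loading any data.
            live := newBackend(false)
            live.buildGraph()
        }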
    • ggml: Support closing backends · 756c78cf
      Jesse Gross authored
      In order to iteratively find the best memory allocation, we need to
      be able to free backend memory so we can try again.
    • ggml: Use GGML's typedef'ed pointer types · d7f4f788
      Jesse Gross authored
      For many backend data structures, GGML defines a typedef of a pointer
      type and returns these from functions. In most cases, CGo understands
      that these are interchangeable, but some parts of Go (such as generics)
      think they are two different types. We should prefer the form that
      GGML uses.
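
      A pure-Go analogy of the friction described above, assuming illustrative type
      names (buffer, bufferT) rather than the actual cgo-generated ones:

        package main

        import "fmt"

        type buffer struct{ size int }

        // bufferT mimics GGML's typedef'ed pointer types (e.g. ggml_backend_buffer_t):
        // a named type whose underlying type is a plain pointer.
        type bufferT *buffer

        // same infers a single type parameter from both arguments.
        func same[T comparable](a, b T) bool { return a == b }

        func main() {
            raw := &buffer{size: 1}
            var typed bufferT = raw // plain assignment is fine: the types are compatible

            fmt.Println(same(raw, raw))     // ok: T is *buffer
            fmt.Println(same(typed, typed)) // ok: T is bufferT
            // fmt.Println(same(raw, typed)) // compile error: *buffer vs bufferT
            // Consistently using the typedef'ed form GGML returns avoids this friction.
        }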
  9. 06 Aug, 2025 1 commit
  10. 05 Aug, 2025 1 commit
    • gpt-oss (#11672) · fa7776fd
      Michael Yang authored
      
      
      * bf16
      
      * tests
      
      * gpt-oss
      
      * enable gptoss for engine
      
      * rough estimate
      
      * convert to mxfp4
      
      * handle safetensors U8
      
      * clamp glu/linear
      
      * update tokenizer
      
      * MXFP4 support
      
      This implements the Open Compute Microscaling (MX) FP4 format
      as a tensor type, with backend implementations focusing
      on mulmat and mulmatid on CPU, CUDA, and Metal (a decoding
      sketch follows this commit entry).
      
      * Unit tests for MXFP4 support
      
      This exercises various operations and shapes on both CPU and GPU (if detected
      on the system)
      
      * cuda graph
      
      * unit test adjustments
      
      * cuda: optimize memory access
      
      Read 4 bytes at a time (8 elements) when performing mul_mat_vec_mxfp4
      
      * mac: fix crash on old macos versions
      
      cblas_sgemm is only supported on v13.3 and up; however, bf16 is
      only supported on v14+, so we were falling back to ggml-blas and
      crashing on bf16 tensors. Checking whether the function is null
      seems to be the simplest way to conditionally avoid registering the
      backend.
      
      * server: Minimum context length for gptoss
      
      This model requires a minimum context length of 8192 to function
      effectively. Users can set higher values through all normal mechanisms
      but lower values will be silently reset.
      
      * ggml: Multiply by numParallel for gptoss sliding window
      
      When computing the graph size estimate, the context size is already
      multiplied by numParallel so estimates reflect that. However, since
      sliding window models use a smaller, fixed context size, they need
      to manually take numParallel into account.
      
      * gpt-oss integration
      
      includes harmony parser and thinking levels, etc.
      
      * fix sync
      
      * fix tests
      
      * fix lint
      
      ---------
      Co-authored-by: Daniel Hiltgen <daniel@ollama.com>
      Co-authored-by: Jesse Gross <jesse@ollama.com>
      Co-authored-by: Devon Rifkin <drifkin@drifkin.net>
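
      A hedged sketch of decoding one MXFP4 block as the Microscaling format is
      commonly specified: 32 FP4 (E2M1) elements sharing a single E8M0 power-of-two
      scale. The nibble packing order is an assumption, and this is not the layout
      the ggml kernels use internally.

        package main

        import (
            "fmt"
            "math"
        )

        // fp4 holds the 8 magnitudes representable by FP4 (E2M1); the sign is bit 3.
        var fp4 = [8]float32{0, 0.5, 1, 1.5, 2, 3, 4, 6}

        // decodeMXFP4 expands one block: 32 four-bit elements (two per byte) that
        // share one E8M0 scale, i.e. scale = 2^(e-127).
        func decodeMXFP4(scale uint8, packed [16]byte) [32]float32 {
            s := float32(math.Exp2(float64(int(scale) - 127)))
            var out [32]float32
            for i := 0; i < 32; i++ {
                nib := packed[i/2] >> (4 * uint(i%2)) & 0xF // assume low nibble first
                v := fp4[nib&0x7]
                if nib&0x8 != 0 {
                    v = -v
                }
                out[i] = v * s
            }
            return out
        }

        func main() {
            var block [16]byte
            block[0] = 0x1F // element 0 = 0xF (-6), element 1 = 0x1 (+0.5)
            vals := decodeMXFP4(127, block) // scale exponent 127 -> 2^0 = 1
            fmt.Println(vals[0], vals[1])   // -6 0.5
        }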
  11. 30 Jul, 2025 1 commit
  12. 29 Jul, 2025 1 commit
  13. 17 Jul, 2025 1 commit
  14. 11 Jul, 2025 2 commits
    • ggml: Use assigned layers when reporting loading stats · acef9b4c
      Jesse Gross authored
      Reporting params.NumGPULayers can be misleading because it is the
      requested number of layers, not the actual number that is loaded.
      While they are often the same, there are cases where they might not match,
      such as if the GPU backend is missing.
    • ggml: Disable unused pipeline parallelism · 9a43994c
      Jesse Gross authored
      We're not currently using it, even in cases where we could. Disabling
      it improves generation performance by 10-30% with multiple GPUs.
  15. 09 Jul, 2025 1 commit
    • ggml: Report ordinal IDs for AMD GPUs on Windows · 35fda7b4
      Jesse Gross authored
      We don't get valid UUIDs for AMD GPUs on Windows, so the best option
      is to use the ordinal IDs. This brings us in line with what we currently
      do on the Ollama server - the only exception is AMD GPUs on Linux, which
      fall back to using ordinal IDs. The GGML implementation has no fallback,
      but this case doesn't appear to occur for any of the GPUs that we support.
      
      It's also possible that there are collisions between ordinal IDs for
      different libraries - however the only places where we use them are
      AMD on Windows and Metal on Mac, which can never occur on the same
      system.
  16. 07 Jul, 2025 1 commit
  17. 02 Jul, 2025 1 commit
  18. 27 Jun, 2025 1 commit
    • ggml: Temporarily disable reporting UUIDs · 45f216a9
      Jesse Gross authored
      This is causing segfaults, so disable it. Currently UUIDs are only
      used for debugging purposes, although there are plans to use them in
      additional ways in the future.
      
      Bug #11211
  19. 26 Jun, 2025 1 commit
  20. 23 Jun, 2025 1 commit
    • Re-remove cuda v11 (#10694) · 1c6669e6
      Daniel Hiltgen authored
      * Re-remove cuda v11
      
      Revert the revert - drop v11 support requiring drivers newer than Feb 23
      
      This reverts commit c6bcdc42.
      
      * Simplify layout
      
      With only one version of the GPU libraries, we can simplify things down somewhat.  (Jetsons still require special handling)
      
      * distinct sbsa variant for linux arm64
      
      This avoids accidentally trying to load the sbsa cuda libraries on
      a Jetson system, which results in crashes.
      
      * temporary prevent rocm+cuda mixed loading
  21. 20 Jun, 2025 1 commit
    • ggml: Check return status for computation. · 87b7af6c
      Jesse Gross authored
      We don't check the return status after computing the graph, which
      can silently lead to bad outputs if we try to keep going and future
      computation succeeds. This appears to happen in certain cases on
      Apple M2 devices.
      
      Fixes #11070
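
      A short sketch of the check being added; the Status values and computeGraph
      wrapper are illustrative, standing in for the backend call that runs the graph.

        package main

        import "fmt"

        // Status mirrors a backend status enum; the names are illustrative.
        type Status int

        const (
            StatusSuccess Status = iota
            StatusAllocFailed
            StatusFailed
        )

        // computeGraph stands in for the call into the backend that runs the graph.
        func computeGraph() Status { return StatusSuccess }

        // Forward checks the status instead of assuming success, so a failed compute
        // surfaces as an error rather than silently feeding bad outputs downstream.
        func Forward() error {
            if st := computeGraph(); st != StatusSuccess {
                return fmt.Errorf("graph computation failed with status %d", st)
            }
            return nil
        }

        func main() {
            if err := Forward(); err != nil {
                panic(err)
            }
            fmt.Println("ok")
        }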
  22. 18 Jun, 2025 2 commits
  23. 29 May, 2025 1 commit
    • ggml: Export GPU UUIDs · aaa78180
      Jesse Gross authored
      This enables matching up devices and information reported by the backend
      with system management libraries such as nvml to get accurate free
      memory reporting.
  24. 24 May, 2025 1 commit
  25. 22 May, 2025 3 commits
    • ml: Panic rather than return error on tensor allocation failure · 1f371ea9
      Jesse Gross authored
      FromFloatSlice and FromIntSlice return an error if the shape doesn't
      match the passed data or if memory can't be allocated. Since these
      are inputs, the memory being allocated is system memory rather than VRAM.
      
      In many cases, the caller can't really handle the error and panics.
      
      Empty and Zeros directly panic if they can't allocate memory.
      
      This makes things consistent by panicking for the first two cases,
      removing a fair amount of error handling code. This is also consistent
      with how Go typically handles these situations.
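
      A hedged illustration of the change described above: a shape that doesn't match
      the input data now panics instead of returning an error the caller would only
      propagate. The signature below is a stand-in, not the ml package's actual one.

        package main

        import "fmt"

        type Tensor struct {
            data  []float32
            shape []int
        }

        // FromFloatSlice builds an input tensor in system memory. A mismatched shape
        // is a programming error, so it panics rather than returning an error.
        func FromFloatSlice(data []float32, shape ...int) *Tensor {
            n := 1
            for _, d := range shape {
                n *= d
            }
            if n != len(data) {
                panic(fmt.Sprintf("shape %v does not match %d elements", shape, len(data)))
            }
            return &Tensor{data: data, shape: shape}
        }

        func main() {
            t := FromFloatSlice([]float32{1, 2, 3, 4, 5, 6}, 2, 3)
            fmt.Println(t.shape)
        }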
    • ollamarunner: Memory usage reporting · 73d6a82c
      Jesse Gross authored
      This provides granular information about the backend memory allocations
      required by the runner:
       - Per backend
       - Per layer
       - Weights, cache and graph
       - Allocation status
      
      This can be used for debugging and validating memory estimates.
    • ggml: Report graph memory for failed allocations · 6db8a377
      Jesse Gross authored
      GGML has a function to report the allocated size of a backend buffer.
      However, this returns 0 if we tried to allocate a buffer and it failed.
      For memory management purposes, it's important to know how much we were
      trying to allocate. This extends the API to report attempted sizes for
      all buffers and whether it succeeded.
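
      One way the reporting described in the two commits above could be shaped, as a
      hedged sketch: per-backend, per-layer breakdowns where each buffer records the
      attempted size and whether the allocation succeeded. Field names are illustrative.

        package main

        import "fmt"

        // Allocation records one backend buffer: the size that was requested and
        // whether the allocation actually succeeded.
        type Allocation struct {
            SizeBytes uint64
            Allocated bool
        }

        // LayerMemory splits a layer's usage into weights and KV cache.
        type LayerMemory struct {
            Weights Allocation
            Cache   Allocation
        }

        // MemoryReport breaks runner memory down per backend: per-layer buffers plus
        // the graph (compute) buffer.
        type MemoryReport struct {
            Backend string
            Layers  []LayerMemory
            Graph   Allocation
        }

        func main() {
            r := MemoryReport{
                Backend: "CUDA0",
                Layers: []LayerMemory{{
                    Weights: Allocation{SizeBytes: 96 << 20, Allocated: true},
                    Cache:   Allocation{SizeBytes: 8 << 20, Allocated: true},
                }},
                Graph: Allocation{SizeBytes: 300 << 20, Allocated: false}, // attempted but failed
            }
            fmt.Printf("%+v\n", r)
        }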
  26. 21 May, 2025 2 commits
  27. 20 May, 2025 1 commit
  28. 19 May, 2025 1 commit
    • ggml: Separate tensor load from backend creation · 94ab428e
      Jesse Gross authored
      Currently, when the backend is created, the tensors are loaded at the
      same time, which is a slow operation. This separates them into two
      steps:
       - Create backend, including enumerating tensors and memory allocation
       - Loading tensor data
      
      This allows more flexibility in managing model loading.
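
      A hedged sketch of the two-step flow: enumerate tensors and allocate at creation
      time, then load the (slow) tensor data separately. New and Load are illustrative
      names, not the backend's actual API.

        package main

        import "fmt"

        // Backend is a stand-in for the backend wrapper.
        type Backend struct {
            tensors map[string]int // name -> size, enumerated up front
            loaded  bool
        }

        // New enumerates tensors and allocates memory, but does not read tensor data.
        func New() *Backend {
            return &Backend{tensors: map[string]int{"blk.0.attn_q.weight": 4096 * 4096}}
        }

        // Load is the slow step: stream tensor data into the already-allocated buffers.
        func (b *Backend) Load() error {
            // ...read from the model file into each buffer...
            b.loaded = true
            return nil
        }

        func main() {
            b := New() // fast: enumeration and allocation only
            // The caller can inspect or adjust placement here before paying the load cost.
            if err := b.Load(); err != nil {
                panic(err)
            }
            fmt.Println("loaded:", b.loaded, "tensors:", len(b.tensors))
        }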
  29. 15 May, 2025 1 commit
  30. 14 May, 2025 2 commits