1. 13 Aug, 2025 1 commit
  2. 12 Aug, 2025 1 commit
    • ggml: Use ordinal IDs for AMD GPUs on Linux when UUID is unavailable · a343ae53
      Jesse Gross authored
      Some AMD GPUs do not provide UUIDs and report only "XX". In these
      cases, we should use the ordinal ID as an alternate identifier.
      This matches what we already have to do on Windows for AMD GPUs.
      
      In addition, this prints out the ID for each GPU when enumerating
      them for easier debugging in the future.
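      A minimal sketch of the fallback described above, with hypothetical
      names rather than the actual ggml patch: prefer the device UUID, but
      fall back to the ordinal index when the driver reports a placeholder
      such as "XX".

        package main

        import "fmt"

        type gpuDevice struct {
            Index int
            UUID  string
        }

        // deviceID returns a stable identifier: the UUID when it is usable,
        // otherwise an ID derived from the device's ordinal position.
        func deviceID(d gpuDevice) string {
            if d.UUID == "" || d.UUID == "XX" {
                return fmt.Sprintf("GPU-%d", d.Index)
            }
            return d.UUID
        }

        func main() {
            // Print the ID for each GPU while enumerating, for easier debugging.
            for _, d := range []gpuDevice{{0, "XX"}, {1, "a1b2c3d4"}} {
                fmt.Printf("device %d: id=%s\n", d.Index, deviceID(d))
            }
        }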
  3. 08 Aug, 2025 3 commits
    • ggml: No-alloc mode · 79f6376f
      Jesse Gross authored
      Callers can set a backend buffer type to be no-alloc, meaning that
      it does not allocate memory for tensors or operations. This can
      be used for calculating memory requirements. Tensors and graphs
      must be recreated with no-alloc set to false before loading data.
      
      Defaults to false for newly created backend buffer types.
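      As a sketch of the two-pass flow this enables (the type and method
      names below are assumptions, not the real ggml API): a measurement
      pass tracks sizes without reserving memory, then everything is
      recreated with no-alloc off before any data is loaded.

        package main

        import "fmt"

        // bufferType stands in for a backend buffer type; noAlloc mirrors the
        // new flag and defaults to false.
        type bufferType struct {
            noAlloc bool
            used    uint64
            bufs    [][]byte
        }

        func (b *bufferType) SetNoAlloc(v bool) { b.noAlloc = v }

        // Alloc always records the requested size, but only reserves memory
        // when no-alloc is disabled.
        func (b *bufferType) Alloc(size uint64) {
            b.used += size
            if !b.noAlloc {
                b.bufs = append(b.bufs, make([]byte, size))
            }
        }

        func main() {
            // Pass 1: measure requirements without allocating anything.
            measure := &bufferType{}
            measure.SetNoAlloc(true)
            measure.Alloc(1 << 20)
            measure.Alloc(4 << 20)
            fmt.Printf("estimated requirement: %d bytes\n", measure.used)

            // Pass 2: recreate with no-alloc false (the default) before loading data.
            actual := &bufferType{}
            actual.Alloc(1 << 20)
            actual.Alloc(4 << 20)
            fmt.Printf("real buffers allocated: %d\n", len(actual.bufs))
        }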
    • ggml: Support closing backends · 756c78cf
      Jesse Gross authored
      In order to iteratively find the best memory allocation, we need to
      be able to free backend memory so we can try again.
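      A minimal sketch of the retry loop this makes possible, using stand-in
      types rather than the real API: free the failed attempt, then try a
      smaller plan.

        package main

        import (
            "errors"
            "fmt"
        )

        // backend is a stand-in; pretend only two offloaded layers fit in VRAM.
        type backend struct{ layers int }

        func (b *backend) Allocate() error {
            if b.layers > 2 {
                return errors.New("out of memory")
            }
            return nil
        }

        // Close frees the backend's memory so the next attempt starts from scratch.
        func (b *backend) Close() {}

        func main() {
            for layers := 4; layers >= 0; layers-- {
                b := &backend{layers: layers}
                if err := b.Allocate(); err != nil {
                    b.Close() // release memory before retrying with fewer layers
                    continue
                }
                fmt.Println("loaded with", layers, "GPU layers")
                break
            }
        }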
    • ggml: Use GGML's typedef'ed pointer types · d7f4f788
      Jesse Gross authored
      For many backend data structures, GGML defines a typedef of a pointer
      type and returns these from functions. In most cases, CGo understands
      that these are interchangeable but some parts of Go (such as generics)
      think they are two different types. We should prefer the form that
      GGML uses.
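      A pure-Go analogy of the issue (the real case involves the cgo types
      generated for GGML's headers): a named pointer type and its underlying
      pointer type assign freely, but generic code treats them as distinct.

        package main

        import "fmt"

        type node struct{ v int }

        // nodePtr plays the role of a C typedef of a pointer type, e.g.
        // "typedef struct ggml_backend_buffer * ggml_backend_buffer_t".
        type nodePtr *node

        func first[T any](xs []T) T { return xs[0] }

        func main() {
            a := nodePtr(&node{v: 1})
            var b *node = a // plain assignment converts freely between the forms

            ptrs := []nodePtr{a}
            // Generics are stricter: first(ptrs) yields a nodePtr, so comparing
            // it with a *node needs an explicit conversion. Preferring the
            // typedef'ed form everywhere, as GGML does, sidesteps this.
            fmt.Println(first(ptrs) == nodePtr(b))
        }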
  4. 06 Aug, 2025 1 commit
  5. 05 Aug, 2025 1 commit
    • gpt-oss (#11672) · fa7776fd
      Michael Yang authored
      * bf16
      
      * tests
      
      * gpt-oss
      
      * enable gptoss for engine
      
      * rough estimate
      
      * convert to mxfp4
      
      * handle safetensors U8
      
      * clamp glu/linear
      
      * update tokenizer
      
      * MXFP4 support
      
      This implements the Open Compute Microscaling (MX) FP4 format
      as a tensor type with backend implementations focusing
      on mulmat and mulmatid on CPU, CUDA, and Metal.
      
      * Unit tests for MXFP4 support
      
      This exercises various operations and shapes on both CPU and GPU (if detected
      on the system)
      
      * cuda graph
      
      * unit test adjustments
      
      * cuda: optimize memory access
      
      Read 4 bytes at a time (8 elements) when performing mul_mat_vec_mxfp4
      
      * mac: fix crash on old macos versions
      
      cblas_sgemm is only supported on v13.3 and up; however, bf16 is
      only supported on v14+, so we were falling back to ggml-blas and
      crashing on bf16 tensors. Checking whether the function is null
      seems to be the simplest way to conditionally avoid registering the
      backend.
      
      * server: Minimum context length for gptoss
      
      This model requires a minimum context ...
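      For reference, a hedged sketch of how one MXFP4 block can be decoded,
      following the OCP Microscaling spec: 32 FP4 (E2M1) elements packed two
      per byte, sharing a single E8M0 power-of-two scale. This is
      illustrative only; the exact packing ggml uses may differ.

        package main

        import (
            "fmt"
            "math"
        )

        // e2m1 holds the eight non-negative FP4 magnitudes; the top bit of a
        // nibble is the sign.
        var e2m1 = [8]float32{0, 0.5, 1, 1.5, 2, 3, 4, 6}

        func fp4(nibble byte) float32 {
            v := e2m1[nibble&0x7]
            if nibble&0x8 != 0 {
                return -v
            }
            return v
        }

        // decodeBlock expands one 32-element block: a shared E8M0 scale byte
        // plus 16 bytes of packed nibbles.
        func decodeBlock(scale byte, packed [16]byte) [32]float32 {
            s := float32(math.Pow(2, float64(scale)-127))
            var out [32]float32
            for i, b := range packed {
                out[2*i] = fp4(b&0x0f) * s
                out[2*i+1] = fp4(b>>4) * s
            }
            return out
        }

        func main() {
            var packed [16]byte
            packed[0] = 0x17 // low nibble 0x7 (+6.0), high nibble 0x1 (+0.5)
            out := decodeBlock(127, packed)
            fmt.Println(out[:2]) // scale 2^0 == 1, so this prints [6 0.5]
        }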
  6. 30 Jul, 2025 1 commit
  7. 29 Jul, 2025 1 commit
  8. 17 Jul, 2025 1 commit
  9. 11 Jul, 2025 2 commits
    • ggml: Use assigned layers when reporting loading stats · acef9b4c
      Jesse Gross authored
      Reporting params.NumGPULayers can be misleading because it is the
      requested number of layers, not the actual number that is loaded.
      While they are often the same, there are cases where they can differ,
      such as if the GPU backend is missing.
    • ggml: Disable unused pipeline parallelism · 9a43994c
      Jesse Gross authored
      We're not currently using it, even in cases where we could. Disabling
      it improves generation performance by 10-30% with multiple GPUs.
  10. 09 Jul, 2025 1 commit
    • ggml: Report ordinal IDs for AMD GPUs on Windows · 35fda7b4
      Jesse Gross authored
      We don't get valid UUIDs for AMD GPUs on Windows, so the best option
      is to use the ordinal IDs. This brings us in line with what we currently
      do on the Ollama server - the only exception is AMD GPUs on Linux, which
      fall back to using ordinal IDs. The GGML implementation has no fallback,
      but the issue doesn't appear to occur for any of the GPUs that we support.
      
      It's also possible that there are collisions between ordinal IDs for
      different libraries - however the only places where we use them are
      AMD on Windows and Metal on Mac, which can never occur on the same
      system.
  11. 07 Jul, 2025 1 commit
  12. 02 Jul, 2025 1 commit
  13. 27 Jun, 2025 1 commit
    • ggml: Temporarily disable reporting UUIDs · 45f216a9
      Jesse Gross authored
      This is causing segfaults, so disable it. Currently UUIDs are only
      used for debugging purposes, although they are planned to be used in
      additional ways in the future.
      
      Bug #11211
  14. 26 Jun, 2025 1 commit
  15. 23 Jun, 2025 1 commit
    • Re-remove cuda v11 (#10694) · 1c6669e6
      Daniel Hiltgen authored
      * Re-remove cuda v11
      
      Revert the revert - drop v11 support requiring drivers newer than Feb 23
      
      This reverts commit c6bcdc42.
      
      * Simplify layout
      
      With only one version of the GPU libraries, we can simplify things somewhat. (Jetsons still require special handling.)
      
      * distinct sbsa variant for linux arm64
      
      This avoids accidentally trying to load the sbsa cuda libraries on
      a jetson system which results in crashes.
      
      * temporary prevent rocm+cuda mixed loading
  16. 20 Jun, 2025 1 commit
    • ggml: Check return status for computation. · 87b7af6c
      Jesse Gross authored
      We don't check the return status after computing the graph, which
      can silently lead to bad outputs if we try to keep going and future
      computation succeeds. This appears to happen in certain cases on
      Apple M2 devices.
      
      Fixes #11070
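      A small illustration of the pattern, with placeholder names rather than
      ollama's actual code: surface a failed graph compute as an error instead
      of reading back possibly-garbage outputs.

        package main

        import "fmt"

        // status mirrors a backend compute status code (0 == success); the
        // names are placeholders, not ggml's actual enum.
        type status int

        const statusSuccess status = 0

        func computeGraph() status { return 1 } // pretend the backend failed

        func forward() ([]float32, error) {
            if st := computeGraph(); st != statusSuccess {
                // Previously the status was ignored, so a failure here could
                // silently turn into bad outputs downstream.
                return nil, fmt.Errorf("graph computation failed: status %d", st)
            }
            return []float32{ /* read back outputs here */ }, nil
        }

        func main() {
            if _, err := forward(); err != nil {
                fmt.Println(err)
            }
        }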
  17. 18 Jun, 2025 2 commits
  18. 29 May, 2025 1 commit
    • ggml: Export GPU UUIDs · aaa78180
      Jesse Gross authored
      This enables matching up devices and information reported by the backend
      with system management libraries such as nvml to get accurate free
      memory reporting.
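      A sketch of the kind of matching this enables; lookupFreeMemory below is
      a stand-in for an nvml-style query, not a real binding.

        package main

        import "fmt"

        type backendDevice struct {
            Name string
            UUID string
        }

        // lookupFreeMemory is a placeholder for a management-library query
        // keyed by UUID.
        func lookupFreeMemory(uuid string) (uint64, bool) {
            table := map[string]uint64{"GPU-1234": 8 << 30}
            free, ok := table[uuid]
            return free, ok
        }

        func main() {
            devices := []backendDevice{{Name: "CUDA0", UUID: "GPU-1234"}}
            for _, d := range devices {
                // Pair the backend's device with the management library's entry
                // by UUID to get an accurate free-memory figure.
                if free, ok := lookupFreeMemory(d.UUID); ok {
                    fmt.Printf("%s (%s): %d bytes free\n", d.Name, d.UUID, free)
                }
            }
        }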
  19. 24 May, 2025 1 commit
  20. 22 May, 2025 3 commits
    • ml: Panic rather than return error on tensor allocation failure · 1f371ea9
      Jesse Gross authored
      FromFloatSlice and FromIntSlice return an error if the shape doesn't
      match the passed data or if memory can't be allocated. Since these
      are inputs, the memory being allocated is system memory rather than VRAM.
      
      In many cases, the caller can't really handle the error and panics.
      
      Empty and Zeros directly panic if they can't allocate memory.
      
      This makes things consistent by panicking for the first two cases,
      removing a fair amount of error handling code. This is also consistent
      with how Go typically handles these situations.
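      An illustrative stand-in for the post-change behaviour (the signature is
      simplified, not the ml package's actual API): panic on a shape mismatch
      or allocation failure so callers don't have to thread an error they
      can't handle.

        package main

        import "fmt"

        // fromFloatSlice panics rather than returning an error, matching Empty
        // and Zeros.
        func fromFloatSlice(s []float32, shape ...int) []float32 {
            n := 1
            for _, d := range shape {
                n *= d
            }
            if n != len(s) {
                panic(fmt.Errorf("FromFloatSlice: shape %v does not match %d elements", shape, len(s)))
            }
            return s // a real implementation would copy into a backend tensor
        }

        func main() {
            t := fromFloatSlice([]float32{1, 2, 3, 4}, 2, 2) // no error to propagate
            fmt.Println(t)
        }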
    • ollamarunner: Memory usage reporting · 73d6a82c
      Jesse Gross authored
      This provides granular information about the backend memory allocations
      required by the runner:
       - Per backend
       - Per layer
       - Weights, cache and graph
       - Allocation status
      
      This can be used for debugging and validating memory estimates.
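      A hypothetical shape for such a report; the field names are illustrative,
      not the runner's actual types.

        package main

        import "fmt"

        // allocation records how much memory was requested and whether the
        // allocation succeeded.
        type allocation struct {
            Size      uint64
            Allocated bool
        }

        // backendMemory groups allocations per backend: weights and cache per
        // layer, plus the compute graph.
        type backendMemory struct {
            Backend string
            Weights []allocation
            Cache   []allocation
            Graph   allocation
        }

        func main() {
            m := backendMemory{
                Backend: "CUDA0",
                Weights: []allocation{{512 << 20, true}, {512 << 20, true}},
                Cache:   []allocation{{64 << 20, true}, {64 << 20, false}},
                Graph:   allocation{128 << 20, true},
            }
            fmt.Printf("%+v\n", m)
        }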
    • ggml: Report graph memory for failed allocations · 6db8a377
      Jesse Gross authored
      GGML has a function to report the allocated size of a backend buffer.
      However, this returns 0 if we tried to allocate a buffer and it failed.
      For memory management purposes, it's important to know how much we were
      trying to allocate. This extends the API to report attempted sizes for
      all buffers and whether each allocation succeeded.
  21. 21 May, 2025 2 commits
  22. 20 May, 2025 1 commit
  23. 19 May, 2025 1 commit
    • ggml: Separate tensor load from backend creation · 94ab428e
      Jesse Gross authored
      Currently, when the backend is created, the tensors are loaded at the
      same time, which is a slow operation. This separates them into two
      steps:
       - Create backend, including enumerating tensors and memory allocation
       - Loading tensor data
      
      This allows more flexibility in managing model loading.
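      A sketch of the two-step flow with stand-in names: construction
      enumerates and allocates, while the slow data load is a separate call
      the caller can schedule when it is ready.

        package main

        import "fmt"

        type backend struct{ tensors []string }

        // newBackend enumerates tensors and allocates memory, but loads no data.
        func newBackend() *backend {
            return &backend{tensors: []string{"token_embd.weight", "output.weight"}}
        }

        // Load reads tensor data; this is the slow step, now under the
        // caller's control.
        func (b *backend) Load(progress func(float32)) {
            for i := range b.tensors {
                progress(float32(i+1) / float32(len(b.tensors)))
            }
        }

        func main() {
            b := newBackend() // fast: structure and allocation only
            b.Load(func(p float32) { fmt.Printf("loading: %3.0f%%\n", p*100) })
        }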
  24. 15 May, 2025 1 commit
  25. 14 May, 2025 2 commits
  26. 12 May, 2025 2 commits
  27. 10 May, 2025 1 commit
  28. 06 May, 2025 1 commit
    • Move quantization to new backend (#10363) · 42481045
      Daniel Hiltgen authored
      * Move quantization logic to GGML via new backend
      
      This moves the model-aware logic to Go code and calls GGML's quantization code for model creation.
      
      * Remove "add model quantizations"
      
      This is no longer needed now that quantization is implemented in Go+GGML code directly.
  29. 05 May, 2025 1 commit
  30. 02 May, 2025 2 commits
    • ggml: Fix race that resulted in "context canceled" when loading · a6ef73f4
      Jesse Gross authored
      Successfully completing processing with an errgroup cancels the
      associated context. However, we also have a goroutine that is checking
      for cancelation of the context. As a result, there is a race where
      the goroutine can pick up the cancelation and report an error,
      replacing the successful result.
      
      To avoid that, this replaces the goroutine with a cancelation check
      when we are reading files. This also has the advantage of stopping
      all reads relatively quickly on error and also ensuring that there are
      no outstanding I/O operations when we return in this case.
      
      The downside is that if a file read blocks forever (for example, over
      the network) then cancelation of the context effectively won't be
      honored. However, this is also true for other smaller files we read
      and the tensors are read in small chunks (128K), so it's consistent
      and better on balance overall.
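      A minimal reproduction of the shape of the fix, with a stand-in for the
      file-reading work: the cancelation check lives in the read loop, so there
      is no watcher goroutine left to race with a successful Wait.

        package main

        import (
            "context"
            "fmt"

            "golang.org/x/sync/errgroup"
        )

        func readChunks(ctx context.Context, n int) error {
            for i := 0; i < n; i++ {
                // Check for cancelation between chunk reads instead of running
                // a separate goroutine that watches ctx.Done().
                if err := ctx.Err(); err != nil {
                    return err
                }
                // ... read the next 128K chunk here ...
            }
            return nil
        }

        func main() {
            g, ctx := errgroup.WithContext(context.Background())
            for i := 0; i < 4; i++ {
                g.Go(func() error { return readChunks(ctx, 8) })
            }
            // A successful Wait still cancels ctx, but nothing is left watching
            // it, so no spurious "context canceled" error is reported.
            if err := g.Wait(); err != nil {
                fmt.Println("load failed:", err)
                return
            }
            fmt.Println("load succeeded")
        }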
    • ollamarunner: Re-enable worst case graph preallocation. · c2f5d666
      Jesse Gross authored
      Worst case graph preallocation was disabled by a27462b7
      "ollamarunner: Temporarily disable worst case graph preallocation"
      since it caused crashes with large batches when not using the GPU.
      
      This backports upstream llama.cpp commit f057808
      "ggml: Don't assert fail when tensor data changes (#13222)", which
      fixes the underlying bug and allows reverting the previous workaround.