1. 04 Aug, 2025 2 commits
    • cuda graph · e6f39bce
      Michael Yang authored
    • MXFP4 support · 4fb47ed3
      Daniel Hiltgen authored
      This implements the Open Compute Microscaling (MX) FP4 format
      as a tensor type with backend implementations focusing
      on mulmat and mulmatid on CPU, CUDA, and Metal.
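Per the OCP Microscaling spec, an MXFP4 block is 32 FP4 (E2M1) elements sharing a single E8M0 scale, so one block occupies 17 bytes: one scale byte plus 16 packed data bytes. A minimal decode sketch of that layout; the helper name and low-nibble-first packing order are assumptions for illustration, not Ollama's actual implementation:

```python
# Positive magnitudes representable in E2M1 (1 sign, 2 exponent, 1 mantissa bit).
FP4_E2M1 = [0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0]

def decode_mxfp4_block(scale_byte, packed16):
    """Decode one MXFP4 block: 1 E8M0 scale byte + 16 bytes of packed FP4."""
    # E8M0 shared scale: 2**(e - 127); 0xFF is the NaN encoding.
    scale = float("nan") if scale_byte == 0xFF else 2.0 ** (scale_byte - 127)
    out = []
    for b in packed16:  # 16 bytes -> 32 FP4 codes, low nibble first (assumed)
        for nib in (b & 0x0F, b >> 4):
            mag = FP4_E2M1[nib & 0x7]           # 3 low bits index the magnitude
            out.append(-mag * scale if nib & 0x8 else mag * scale)  # top bit is sign
    return out
```

With scale byte 127 the scale is 2^0 = 1, so the decoded values are exactly the E2M1 table entries.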
  2. 30 Jul, 2025 1 commit
  3. 29 Jul, 2025 1 commit
  4. 09 Jul, 2025 1 commit
    • ggml: Report ordinal IDs for AMD GPUs on Windows · 35fda7b4
      Jesse Gross authored
      We don't get valid UUIDs for AMD GPUs on Windows, so the best option
      is to use the ordinal IDs. This brings us in line with what we currently
      do on the Ollama server - the only exception is AMD GPUs on Linux, which
      fall back to using ordinal IDs. The GGML implementation has no fallback,
      but the case that would need one doesn't appear to occur for any of the
      GPUs that we support.
      
      It's also possible that there are collisions between ordinal IDs for
      different libraries - however the only places where we use them are
      AMD on Windows and Metal on Mac, which can never occur on the same
      system.
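The selection rule described above boils down to: use the backend's UUID when it is valid, otherwise fall back to the ordinal index. A toy sketch of that rule; the function name and the "invalid" sentinel values are illustrative assumptions, not the real GGML code:

```python
def pick_device_id(uuid, ordinal):
    """Prefer a valid device UUID; fall back to the ordinal index.

    This mirrors the fallback used for AMD GPUs on Windows, where no
    valid UUID is reported. The invalid sentinels below are assumed
    for illustration.
    """
    invalid = (None, "", "GPU-00000000-0000-0000-0000-000000000000")
    return uuid if uuid not in invalid else str(ordinal)
```

Because the fallback paths (AMD on Windows, Metal on macOS) never coexist on one system, ordinal collisions across libraries cannot arise in practice.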
  5. 26 Jun, 2025 1 commit
  6. 23 Jun, 2025 1 commit
    • Re-remove cuda v11 (#10694) · 1c6669e6
      Daniel Hiltgen authored
      * Re-remove cuda v11
      
      Revert the revert - drop v11 support, which requires drivers newer than Feb 2023
      
      This reverts commit c6bcdc42.
      
      * Simplify layout
      
      With only one version of the GPU libraries, we can simplify things somewhat. (Jetsons still require special handling.)
      
      * distinct sbsa variant for linux arm64
      
      This avoids accidentally trying to load the sbsa cuda libraries on
      a jetson system which results in crashes.
      
      * temporarily prevent rocm+cuda mixed loading
  7. 18 Jun, 2025 2 commits
  8. 29 May, 2025 1 commit
    • ggml: Export GPU UUIDs · aaa78180
      Jesse Gross authored
      This enables matching up devices and information reported by the backend
      with system management libraries such as nvml to get accurate free
      memory reporting.
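The matching this enables can be sketched as a simple join on UUID between the devices the backend exports and the per-device reports from a management library such as NVML. The function and data shapes below are illustrative assumptions, not the real bindings:

```python
def free_memory_by_uuid(backend_devices, mgmt_free):
    """Join backend devices with management-library reports by UUID.

    backend_devices: (name, uuid) pairs as exported by the backend.
    mgmt_free: uuid -> free bytes, e.g. as gathered via NVML.
    Returns name -> free bytes for devices present in both views.
    """
    return {name: mgmt_free[uuid]
            for name, uuid in backend_devices
            if uuid in mgmt_free}
```

Keying on UUID rather than enumeration order is what makes the join robust: the backend and the management library may enumerate devices differently.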
  9. 22 May, 2025 1 commit
    • ggml: Report graph memory for failed allocations · 6db8a377
      Jesse Gross authored
      GGML has a function to report the allocated size of a backend buffer.
      However, this returns 0 if we tried to allocate a buffer and it failed.
      For memory management purposes, it's important to know how much we were
      trying to allocate. This extends the API to report the attempted size
      for each buffer and whether the allocation succeeded.
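A sketch of the shape such an extended report can take: each buffer records the size it tried to allocate plus a success flag, so totals stay meaningful even when an allocation fails. The names here are hypothetical, not the actual GGML API:

```python
class BufferAlloc:
    """One backend buffer allocation attempt (hypothetical record)."""

    def __init__(self, requested, ok):
        self.requested = requested  # bytes we tried to allocate
        self.ok = ok                # whether the allocation succeeded

def graph_memory(buffers):
    """Return (attempted bytes, actually allocated bytes) across all buffers."""
    attempted = sum(b.requested for b in buffers)
    allocated = sum(b.requested for b in buffers if b.ok)
    return attempted, allocated
```

Reporting the attempted total separately is the point of the change: a failed 50-byte allocation still contributes 50 to `attempted`, instead of silently reporting 0.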
  10. 14 May, 2025 2 commits
  11. 13 May, 2025 2 commits
  12. 12 May, 2025 1 commit
  13. 06 May, 2025 1 commit
    • Move quantization to new backend (#10363) · 42481045
      Daniel Hiltgen authored
      * Move quantization logic to GGML via new backend
      
      This moves the model-aware logic to Go code and calls GGML's quantization code for model creation.
      
      * Remove "add model quantizations"
      
      This is no longer needed now that quantization is implemented in Go+GGML code directly.
  14. 02 May, 2025 2 commits
  15. 25 Apr, 2025 1 commit
  16. 24 Apr, 2025 1 commit
  17. 17 Apr, 2025 1 commit
  18. 16 Apr, 2025 1 commit
  19. 15 Apr, 2025 1 commit
  20. 03 Apr, 2025 1 commit
    • model: support for mistral-small in the ollama runner · 6bd0a983
      Bruce MacDonald authored
      Mistral is a popular research lab producing open-source models. This updates
      the forward pass of llama-architecture models to support both llama and
      mistral models by accounting for additional metadata present in mistral
      models and finding the correct dimensions for the output projection.
  21. 27 Mar, 2025 1 commit
  22. 15 Mar, 2025 1 commit
  23. 11 Mar, 2025 1 commit
  24. 07 Mar, 2025 1 commit
  25. 03 Mar, 2025 1 commit
  26. 28 Feb, 2025 1 commit
  27. 27 Feb, 2025 1 commit
  28. 24 Feb, 2025 1 commit
  29. 20 Feb, 2025 1 commit
  30. 19 Feb, 2025 1 commit
  31. 18 Feb, 2025 1 commit
    • build: remove backend build for sapphirerapids · 5f8c0318
      Michael Yang authored
      Sapphire Rapids has AMX support, but enabling it ends up having a
      negative performance impact.
      
      Emerald Rapids also has AMX support, with a positive performance impact;
      however, there's no reasonable way in GGML to differentiate between the
      two. The impact is small (~6%), so disable AMX entirely for simplicity.
  32. 14 Feb, 2025 1 commit
  33. 11 Feb, 2025 1 commit
  34. 10 Feb, 2025 1 commit
  35. 05 Feb, 2025 1 commit