1. 09 Jul, 2025 1 commit
    • ggml: Report ordinal IDs for AMD GPUs on Windows · 35fda7b4
      Jesse Gross authored
      We don't get valid UUIDs for AMD GPUs on Windows, so the best option
      is to use the ordinal IDs. This brings us in line with what we currently
      do on the Ollama server - the only exception is AMD GPUs on Linux, which
      fall back to using ordinal IDs. The GGML implementation has no fallback,
      but the missing-UUID case doesn't appear to occur for any of the GPUs
      that we support.
      
      It's also possible that there are collisions between ordinal IDs for
      different libraries - however the only places where we use them are
      AMD on Windows and Metal on Mac, which can never occur on the same
      system.
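The UUID-or-ordinal selection described above can be sketched in Go; the type and field names here are illustrative, not GGML's actual structs:

```go
package main

import "fmt"

// DeviceInfo is a hypothetical per-device record; field names are
// illustrative, not the actual GGML API.
type DeviceInfo struct {
	Backend string // e.g. "CUDA", "ROCm", "Metal"
	UUID    string // empty when the driver reports no valid UUID
	Ordinal int
}

// deviceID prefers the driver-reported UUID and falls back to the
// ordinal index, as described for AMD GPUs on Windows.
func deviceID(d DeviceInfo) string {
	if d.UUID != "" {
		return d.UUID
	}
	// Ordinal IDs are only unique per backend, but per the commit the
	// backends that rely on them (AMD on Windows, Metal on macOS)
	// never coexist on one system.
	return fmt.Sprintf("%s-%d", d.Backend, d.Ordinal)
}

func main() {
	fmt.Println(deviceID(DeviceInfo{Backend: "ROCm", Ordinal: 0}))
	fmt.Println(deviceID(DeviceInfo{Backend: "CUDA", UUID: "GPU-1234", Ordinal: 1}))
}
```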
  2. 26 Jun, 2025 1 commit
  3. 23 Jun, 2025 1 commit
    • Re-remove cuda v11 (#10694) · 1c6669e6
      Daniel Hiltgen authored
      * Re-remove cuda v11
      
      Revert the revert - drop v11 support requiring drivers newer than Feb 23
      
      This reverts commit c6bcdc42.
      
      * Simplify layout
      
      With only one version of the GPU libraries, we can simplify things down somewhat.  (Jetsons still require special handling)
      
      * distinct sbsa variant for linux arm64
      
      This avoids accidentally trying to load the sbsa cuda libraries on
      a jetson system which results in crashes.
      
      * temporary prevent rocm+cuda mixed loading
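The sbsa/jetson split above amounts to picking a library variant at load time. A hedged Go sketch, assuming the common /etc/nv_tegra_release marker file as the Jetson heuristic and hypothetical variant directory names (the commit's actual detection may differ):

```go
package main

import (
	"fmt"
	"os"
)

// isJetson reports whether we appear to be on an NVIDIA Jetson board,
// where loading the generic sbsa CUDA libraries would crash. Checking
// /etc/nv_tegra_release is a common heuristic, not necessarily the
// exact check used.
func isJetson() bool {
	_, err := os.Stat("/etc/nv_tegra_release")
	return err == nil
}

// cudaVariant picks which linux/arm64 CUDA library directory to load;
// the directory names are illustrative.
func cudaVariant() string {
	if isJetson() {
		return "cuda_jetpack"
	}
	return "cuda_sbsa"
}

func main() {
	fmt.Println(cudaVariant())
}
```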
  4. 18 Jun, 2025 2 commits
  5. 29 May, 2025 1 commit
    • ggml: Export GPU UUIDs · aaa78180
      Jesse Gross authored
      This enables matching up devices and information reported by the backend
      with system management libraries such as nvml to get accurate free
      memory reporting.
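The matching this enables — joining backend-reported UUIDs against a management library's view — might look like the following sketch; the management-library record is illustrative, not NVML's actual Go bindings:

```go
package main

import "fmt"

// SMDevice is a hypothetical snapshot from a system management library
// such as NVML; field names are illustrative.
type SMDevice struct {
	UUID    string
	FreeMem uint64 // bytes
}

// freeMemoryByUUID joins backend-exported device UUIDs against
// management-library data to get accurate free memory per device.
// Devices with no match are simply omitted.
func freeMemoryByUUID(backendUUIDs []string, sm []SMDevice) map[string]uint64 {
	byUUID := make(map[string]uint64, len(sm))
	for _, d := range sm {
		byUUID[d.UUID] = d.FreeMem
	}
	out := make(map[string]uint64)
	for _, id := range backendUUIDs {
		if free, ok := byUUID[id]; ok {
			out[id] = free
		}
	}
	return out
}

func main() {
	sm := []SMDevice{{UUID: "GPU-a", FreeMem: 8 << 30}}
	fmt.Println(freeMemoryByUUID([]string{"GPU-a", "GPU-b"}, sm))
}
```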
  6. 22 May, 2025 1 commit
    • ggml: Report graph memory for failed allocations · 6db8a377
      Jesse Gross authored
      GGML has a function to report the allocated size of a backend buffer.
      However, this returns 0 if we tried to allocate a buffer and it failed.
      For memory management purposes, it's important to know how much we were
      trying to allocate. This extends the API to report attempted sizes for
      all buffers and whether each allocation succeeded.
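A minimal sketch of the extended reporting — attempted size plus a success flag instead of a bare size that reads 0 on failure. The names are hypothetical, not the GGML API:

```go
package main

import "fmt"

// BufferStatus records the size we tried to allocate and whether the
// allocation succeeded; names are illustrative.
type BufferStatus struct {
	Size      uint64 // bytes requested
	Allocated bool
}

// totalAttempted sums attempted sizes across all buffers, which is
// what memory management needs even when some allocations failed.
func totalAttempted(bufs []BufferStatus) (total uint64, failed int) {
	for _, b := range bufs {
		total += b.Size
		if !b.Allocated {
			failed++
		}
	}
	return total, failed
}

func main() {
	bufs := []BufferStatus{
		{Size: 512 << 20, Allocated: true},
		{Size: 2 << 30, Allocated: false}, // failed, but the attempted size is still known
	}
	total, failed := totalAttempted(bufs)
	fmt.Printf("attempted %d bytes, %d failed\n", total, failed)
}
```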
  7. 14 May, 2025 2 commits
  8. 13 May, 2025 2 commits
  9. 12 May, 2025 1 commit
  10. 06 May, 2025 1 commit
    • Move quantization to new backend (#10363) · 42481045
      Daniel Hiltgen authored
      * Move quantization logic to GGML via new backend
      
      This moves the model-aware logic to Go code and calls GGML's quantization code for model creation.
      
      * Remove "add model quantizations"
      
      This is no longer needed now that quantization is implemented in Go+GGML code directly.
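The "model-aware logic" side of the split might look like the following Go sketch: choose a per-tensor quantization type, leaving the actual conversion to GGML's kernels. The rules and tensor names here are illustrative, not the actual implementation:

```go
package main

import "fmt"

// pickQuantType chooses the quantization type for one tensor given the
// requested target type. Keeping embeddings and the output projection
// at higher precision is a common convention, used here purely as an
// illustrative rule.
func pickQuantType(tensorName, target string) string {
	switch tensorName {
	case "token_embd.weight", "output.weight":
		return "Q6_K"
	}
	return target
}

func main() {
	fmt.Println(pickQuantType("blk.0.attn_q.weight", "Q4_K_M"))
	fmt.Println(pickQuantType("output.weight", "Q4_K_M"))
}
```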
  11. 02 May, 2025 2 commits
  12. 25 Apr, 2025 1 commit
  13. 24 Apr, 2025 1 commit
  14. 17 Apr, 2025 1 commit
  15. 16 Apr, 2025 1 commit
  16. 15 Apr, 2025 1 commit
  17. 03 Apr, 2025 1 commit
    • model: support for mistral-small in the ollama runner · 6bd0a983
      Bruce MacDonald authored
      Mistral is a popular research lab making open-source models. This updates
      the forward pass of llama-architecture models to support both llama and
      mistral models by accounting for additional metadata present in mistral
      models and finding the correct dimensions for the output projection.
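One common way output-projection dimensions get resolved is a tied-weights fallback: when no separate output tensor exists, reuse the token embedding. This sketch is illustrative only and not necessarily the mechanism this commit uses; the tensor names are hypothetical:

```go
package main

import "fmt"

// Tensors maps tensor name to its dimensions; a stand-in for the
// model's metadata, not the actual runner types.
type Tensors map[string][]int

// outputProjectionDims returns the output projection's dimensions,
// falling back to the token embedding (tied weights) when no separate
// output tensor is present.
func outputProjectionDims(t Tensors) []int {
	if dims, ok := t["output.weight"]; ok {
		return dims
	}
	return t["token_embd.weight"]
}

func main() {
	t := Tensors{"token_embd.weight": {4096, 32000}}
	fmt.Println(outputProjectionDims(t))
}
```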
  18. 27 Mar, 2025 1 commit
  19. 15 Mar, 2025 1 commit
  20. 11 Mar, 2025 1 commit
  21. 07 Mar, 2025 1 commit
  22. 03 Mar, 2025 1 commit
  23. 28 Feb, 2025 1 commit
  24. 27 Feb, 2025 1 commit
  25. 24 Feb, 2025 1 commit
  26. 20 Feb, 2025 1 commit
  27. 19 Feb, 2025 1 commit
  28. 18 Feb, 2025 1 commit
    • build: remove backend build for sapphirerapids · 5f8c0318
      Michael Yang authored
      sapphire rapids has amx support but it ends up having a negative
      performance impact.
      
      emerald rapids also has amx support with a positive performance impact
      however there's no reasonable way in ggml to differentiate between the
      two. the impact is small (~6%) so disable amx entirely for simplicity
  29. 14 Feb, 2025 1 commit
  30. 11 Feb, 2025 1 commit
  31. 10 Feb, 2025 1 commit
  32. 05 Feb, 2025 1 commit
  33. 29 Jan, 2025 1 commit
    • next build (#8539) · dcfb7a10
      Michael Yang authored
      
      
      * add build to .dockerignore
      
      * test: only build one arch
      
      * add build to .gitignore
      
      * fix ccache path
      
      * filter amdgpu targets
      
      * only filter if autodetecting
      
      * Don't clobber gpu list for default runner
      
      This ensures the GPU specific environment variables are set properly
      
      * explicitly set CXX compiler for HIP
      
      * Update build_windows.ps1
      
      This isn't complete, but is close.  Dependencies are missing, and it only builds the "default" preset.
      
      * build: add ollama subdir
      
      * add .git to .dockerignore
      
      * docs: update development.md
      
      * update build_darwin.sh
      
      * remove unused scripts
      
      * llm: add cwd and build/lib/ollama to library paths
      
      * default DYLD_LIBRARY_PATH to LD_LIBRARY_PATH in runner on macOS
      
      * add additional cmake output vars for msvc
      
      * interim edits to make server detection logic work with dll directories like lib/ollama/cuda_v12
      
      * remove unnecessary filepath.Dir, cleanup
      
      * add hardware-specific directory to path
      
      * use absolute server path
      
      * build: linux arm
      
      * cmake install targets
      
      * remove unused files
      
      * ml: visit each library path once
      
      * build: skip cpu variants on arm
      
      * build: install cpu targets
      
      * build: fix workflow
      
      * shorter names
      
      * fix rocblas install
      
      * docs: clean up development.md
      
      * consistent build dir removal in development.md
      
      * silence -Wimplicit-function-declaration build warnings in ggml-cpu
      
      * update readme
      
      * update development readme
      
      * llm: update library lookup logic now that there is one runner (#8587)
      
      * tweak development.md
      
      * update docs
      
      * add windows cuda/rocm tests
      
      ---------
      Co-authored-by: jmorganca <jmorganca@gmail.com>
      Co-authored-by: Daniel Hiltgen <daniel@ollama.com>
  34. 08 Jan, 2025 1 commit
  35. 17 Dec, 2024 1 commit
    • llama: Ensure KV cache is fully defragmented. · 08a832b4
      Jesse Gross authored
      Sometimes the KV cache requires defragmentation even without
      triggering the threshold heuristic. In this case, decoding
      will not be able to find a KV cache slot. This is particularly
      difficult for the caller to handle if it happens in between
      ubatches. To avoid this, we should immediately trigger a defrag.
      
      In addition, a heavily fragmented cache can require more than
      max_moves to defragment. Currently, we stop when we hit the limit
      but this can leave a cache that still does not have adequate space
      even after defragmentation is triggered. Instead, we should do
      multiple batches of processing until everything is complete.
      
      Fixes #7949
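The second fix — running defragmentation in batches until complete rather than stopping at the move limit — can be sketched as follows; the move accounting is simplified and illustrative:

```go
package main

import "fmt"

// defragment runs defragmentation passes until no fragmented slots
// remain, instead of stopping after a single max_moves-limited pass.
// Each pass models one graph execution's worth of cell moves.
func defragment(fragmentedSlots, maxMovesPerPass int) (passes int) {
	for fragmentedSlots > 0 {
		moves := fragmentedSlots
		if moves > maxMovesPerPass {
			moves = maxMovesPerPass
		}
		fragmentedSlots -= moves
		passes++
	}
	return passes
}

func main() {
	// A heavily fragmented cache needing 2500 moves with a 1000-move
	// per-pass limit completes in multiple passes rather than
	// stopping short with inadequate space.
	fmt.Println(defragment(2500, 1000))
}
```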
  36. 14 Dec, 2024 1 commit