1. 22 May, 2025 1 commit
    • ggml: Report graph memory for failed allocations · 6db8a377
      Jesse Gross authored
      GGML has a function to report the allocated size of a backend buffer.
      However, this returns 0 if we tried to allocate a buffer and it failed.
      For memory management purposes, it's important to know how much we were
      trying to allocate. This extends the API to report attempted sizes for
      all buffers and whether each allocation succeeded.
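      The change can be illustrated with a minimal Go sketch. The names here
      (BufferAllocation, Requested, totalRequested) are hypothetical and only
      show the shape of an API that reports attempted sizes alongside success;
      they are not the actual Ollama/GGML interface.

      // BufferAllocation records how much memory a backend buffer asked for,
      // even when the allocation itself failed.
      type BufferAllocation struct {
          Requested uint64 // bytes we attempted to allocate
          Allocated bool   // whether the allocation succeeded
      }

      // totalRequested sums attempted sizes so a failed graph allocation can
      // still be reported for memory-management decisions.
      func totalRequested(buffers []BufferAllocation) (total uint64, ok bool) {
          ok = true
          for _, b := range buffers {
              total += b.Requested
              ok = ok && b.Allocated
          }
          return total, ok
      }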
  2. 20 May, 2025 1 commit
  3. 16 May, 2025 1 commit
  4. 14 May, 2025 2 commits
  5. 13 May, 2025 3 commits
  6. 12 May, 2025 1 commit
  7. 10 May, 2025 1 commit
  8. 08 May, 2025 1 commit
  9. 06 May, 2025 1 commit
    • Move quantization to new backend (#10363) · 42481045
      Daniel Hiltgen authored
      * Move quantization logic to GGML via new backend
      
      This moves the model-aware logic to Go code and calls GGML's quantization code for model creation.
      
      * Remove "add model quantizations"
      
      This is no longer needed now that quantization is implemented in Go+GGML code directly.
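      A minimal Go sketch of the split described above: the model-aware choice
      of quantization type stays in Go, while the numeric conversion is
      delegated to GGML. quantTypeFor, quantizeTensor, and the ggmlQuantize
      callback are hypothetical stand-ins, not the real Ollama API.

      // quantTypeFor keeps the model-aware policy in Go: some tensors are held
      // at higher precision regardless of the requested target type.
      func quantTypeFor(tensorName, target string) string {
          if tensorName == "token_embd.weight" || tensorName == "output.weight" {
              return "Q6_K" // illustrative per-tensor override
          }
          return target
      }

      // quantizeTensor applies the policy above and hands the actual work to a
      // callback standing in for the cgo call into GGML's quantizer.
      func quantizeTensor(name string, data []float32, target string,
          ggmlQuantize func([]float32, string) ([]byte, error)) ([]byte, error) {
          return ggmlQuantize(data, quantTypeFor(name, target))
      }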
  10. 05 May, 2025 2 commits
  11. 02 May, 2025 2 commits
  12. 25 Apr, 2025 1 commit
  13. 24 Apr, 2025 1 commit
  14. 17 Apr, 2025 1 commit
  15. 16 Apr, 2025 1 commit
  16. 15 Apr, 2025 1 commit
  17. 03 Apr, 2025 1 commit
    • model: support for mistral-small in the ollama runner · 6bd0a983
      Bruce MacDonald authored
      Mistral is a popular research lab that releases open-source models. This
      updates the forward pass of llama-architecture models to support both
      llama and mistral models by accounting for the additional metadata present
      in mistral models and finding the correct dimensions for the output
      projection.
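      A minimal sketch of the dimension lookup described above, assuming a
      generic key/value metadata map. The key names and the fallback rule are
      illustrative only, not the actual mistral-small metadata layout.

      // outputDim returns the size of the output projection, preferring an
      // explicit metadata entry and otherwise falling back to the embedding
      // length shared with llama-style models.
      func outputDim(kv map[string]uint64) uint64 {
          if d, ok := kv["mistral.embedding_length"]; ok { // hypothetical key
              return d
          }
          return kv["llama.embedding_length"] // hypothetical fallback key
      }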
  18. 31 Mar, 2025 1 commit
    • runner: clear cache when shift is not possible (#9433) · 66b25392
      Bruce MacDonald authored
      Clear the KV cache when the shift operation is not supported by the model.
      Adds a KvCacheCanShift() check to handle models that can't perform cache
      shifts, falling back to a full cache clear while preserving the logical
      token history so behavior stays as expected when the context window fills
      up.
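      A minimal Go sketch of the fallback described above. The cache interface
      and method names are hypothetical; they only illustrate checking whether a
      shift is possible before choosing between shifting and a full clear that
      keeps the logical token history on the caller's side.

      // kvCache is a hypothetical stand-in for the runner's cache interface.
      type kvCache interface {
          CanShift() bool
          Shift(discard int) error
          Clear() error
      }

      // makeRoom frees space when the context window is full: shift if the
      // model supports it, otherwise clear the cache entirely. The caller keeps
      // the logical token history and re-processes it after a clear.
      func makeRoom(c kvCache, discard int) error {
          if c.CanShift() {
              return c.Shift(discard)
          }
          return c.Clear()
      }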
  19. 27 Mar, 2025 1 commit
  20. 15 Mar, 2025 1 commit
  21. 11 Mar, 2025 1 commit
  22. 10 Mar, 2025 1 commit
  23. 07 Mar, 2025 1 commit
  24. 04 Mar, 2025 1 commit
    • ml/backend/ggml: consolidate system info logging · 05a01fde
      Michael Yang authored
      - output backend system info when initializing the backend. this ensures
        the information is always present without requiring an explicit call
      - convert to structured logging
      - enumerate devices rather than backends since devices are ordered
      - track device indices grouped by device name
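      A minimal Go sketch in the spirit of this change, using the standard
      log/slog package for structured logging. The device names and attribute
      keys are illustrative only.

      package main

      import "log/slog"

      func main() {
          // Enumerate devices (ordered) and group their indices by device name.
          devices := []string{"NVIDIA GeForce RTX 4090", "NVIDIA GeForce RTX 4090", "CPU"}
          byName := map[string][]int{}
          for i, name := range devices {
              byName[name] = append(byName[name], i)
          }
          for name, idxs := range byName {
              // Structured attributes keep the output machine-readable.
              slog.Info("system info", "device", name, "indices", idxs)
          }
      }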
  25. 03 Mar, 2025 1 commit
  26. 28 Feb, 2025 2 commits
  27. 27 Feb, 2025 2 commits
  28. 25 Feb, 2025 1 commit
  29. 24 Feb, 2025 1 commit
  30. 20 Feb, 2025 1 commit
  31. 19 Feb, 2025 1 commit
  32. 18 Feb, 2025 1 commit
    • build: remove backend build for sapphirerapids · 5f8c0318
      Michael Yang authored
      sapphire rapids has amx support, but it ends up having a negative
      performance impact.
      
      emerald rapids also has amx support with a positive performance impact;
      however, there's no reasonable way in ggml to differentiate between the
      two. the impact is small (~6%), so disable amx entirely for simplicity
  33. 14 Feb, 2025 1 commit