1. 08 Apr, 2025 1 commit
    • ggml: Check for OOM and return as Go errors · a807985e
      Jesse Gross authored
      If there is a CUDA OOM, we currently don't check the return value
      and will eventually segfault. This checks for the problem and generates
      a Go error. At the moment, this will still result in a panic, but having
      the error is the first step to being able to handle it more gracefully.
      A sketch of the pattern follows below.
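      A minimal sketch of the pattern described above, assuming a hypothetical
      allocation helper; the type and function names are illustrative, not the
      real ggml backend API:

          // Hypothetical sketch: check an allocation result and surface OOM as
          // a Go error instead of dereferencing a nil pointer later.
          package backend

          import "fmt"

          type buffer struct{ size int }

          // allocBuffer stands in for a CUDA/GGML allocation that can fail.
          func allocBuffer(size int) *buffer {
              // placeholder: returns nil when the device is out of memory
              return nil
          }

          // newTensorBuffer returns an error rather than handing back a nil
          // pointer that would eventually segfault when used.
          func newTensorBuffer(size int) (*buffer, error) {
              b := allocBuffer(size)
              if b == nil {
                  return nil, fmt.Errorf("unable to allocate %d bytes: out of device memory", size)
              }
              return b, nil
          }

      For now a caller may still panic on the returned error; the point is that
      the failure becomes visible as an error value first.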
  2. 05 Apr, 2025 1 commit
  3. 03 Apr, 2025 2 commits
  4. 27 Mar, 2025 2 commits
    • ml: Remove Output from Context interface · 01aa7887
      Jesse Gross authored
      Model implementations should use Input for all of the tensors they
      supply to the model. This includes tensors that relate to the
      outputs, which is confusing since there is also an Output function.
      
      Since Output is only used internally in GGML and not used by any
      model implementations, we can remove it from the interface to
      reduce confusion. A sketch of the change follows below.
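      A hedged sketch of the kind of interface narrowing described above; the
      method signatures are assumptions, not the real ml.Context definition:

          // Illustrative only: the real ml.Context interface differs.
          package ml

          type Tensor interface{}

          // Model implementations create every tensor, including output-related
          // ones, through Input; Output stays internal to the GGML backend.
          type Context interface {
              Input() Tensor
              // Output() Tensor  // removed from the interface to reduce confusion
          }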
    • Add gfx1200 & gfx1201 support on linux (#9878) · ead27aa9
      saman-amd authored
  5. 21 Mar, 2025 2 commits
  6. 18 Mar, 2025 1 commit
  7. 17 Mar, 2025 2 commits
  8. 13 Mar, 2025 1 commit
  9. 12 Mar, 2025 1 commit
  10. 11 Mar, 2025 8 commits
  11. 10 Mar, 2025 1 commit
  12. 08 Mar, 2025 2 commits
  13. 07 Mar, 2025 13 commits
  14. 04 Mar, 2025 1 commit
    • ml/backend/ggml: consolidate system info logging · 05a01fde
      Michael Yang authored
      - output backend system info when initializing the backend. this ensures
        the information is always present without requiring an explicit call
      - convert to structured logging
      - enumerate devices rather than backends since devices are ordered
      - track device indices grouped by device name
      
      A sketch of this logging pattern follows below.
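      A sketch of the structured-logging pattern described above, using Go's
      log/slog; the device fields and values are made up for illustration:

          // Illustrative sketch of structured device logging with log/slog.
          package main

          import "log/slog"

          type device struct {
              Name        string
              Description string
          }

          func main() {
              // Placeholder device list; in the real backend these come from GGML.
              devices := []device{
                  {Name: "CUDA", Description: "NVIDIA GeForce RTX 4090"},
                  {Name: "CUDA", Description: "NVIDIA GeForce RTX 4090"},
                  {Name: "CPU", Description: "AMD Ryzen 9"},
              }

              // Track device indices grouped by device name, e.g. CUDA -> [0, 1].
              byName := make(map[string][]int)
              for i, d := range devices {
                  byName[d.Name] = append(byName[d.Name], i)
                  // Emitted while initializing the backend, so the information
                  // is always present without an explicit call.
                  slog.Info("system", "device", d.Name, "index", i, "description", d.Description)
              }
              slog.Info("device indices grouped by name", "devices", byName)
          }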
  15. 03 Mar, 2025 1 commit
  16. 02 Mar, 2025 1 commit
    • ml: Enable support for flash attention · 21aa666a
      Jesse Gross authored
      The GGML flash attention kernel has specific requirements for
      padding and permutation. This adds support to the KV cache
      for conforming to these requirements so that flash attention
      can be enabled.
      
      Flash attention can be used in the same situations as with the llama
      engine and is enabled by the user in the same way. A sketch of the
      padding idea follows below.
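      A sketch of the cache-length padding idea, assuming a padding granularity
      of 256 entries; the constant and helper name are assumptions, not the
      actual kernel requirement:

          // Illustrative: pad the KV cache length so the flash attention
          // kernel's assumed alignment requirement is met.
          package main

          import "fmt"

          const flashAttnPadding = 256 // assumed alignment required by the kernel

          // padCacheLen rounds a cache length up to the next multiple of the padding.
          func padCacheLen(n int) int {
              return ((n + flashAttnPadding - 1) / flashAttnPadding) * flashAttnPadding
          }

          func main() {
              fmt.Println(padCacheLen(1000)) // 1024
              fmt.Println(padCacheLen(1024)) // 1024
          }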