1. 03 Apr, 2025 2 commits
  2. 27 Mar, 2025 2 commits
    • ml: Remove Output from Context interface · 01aa7887
      Jesse Gross authored
      Model implementations should use Input for all of the tensors they
      supply to the model. This includes tensors that relate to the
      outputs, which is confusing since there is also an Output function.
      
      Since Output is only used internally in GGML and not by any
      model implementations, we can remove it from the interface to
      reduce confusion.
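      The interface change described above can be sketched in Go. `Context`, `Tensor`, and the method names below are hypothetical simplifications inferred from the commit message, not the real ollama ml API:

      ```go
      package main

      import "fmt"

      // Tensor is a stand-in for the backend tensor type.
      type Tensor interface {
      	Shape() []int
      }

      // Context sketches the narrowed interface: models allocate every
      // tensor, including output-related ones, through Input. The former
      // Output method is gone because only the GGML backend used it.
      type Context interface {
      	Input() Context
      	Empty(shape ...int) Tensor
      }

      // Minimal in-memory implementation so the sketch is runnable.
      type tensor struct{ shape []int }

      func (t tensor) Shape() []int { return t.shape }

      type ctx struct{}

      func (c ctx) Input() Context            { return c }
      func (c ctx) Empty(shape ...int) Tensor { return tensor{shape} }

      func main() {
      	var c Context = ctx{}
      	t := c.Input().Empty(2, 3) // output tensors go through Input too
      	fmt.Println(t.Shape())
      }
      ```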
    • Add gfx1200 & gfx1201 support on linux (#9878) · ead27aa9
      saman-amd authored
  3. 21 Mar, 2025 2 commits
  4. 18 Mar, 2025 1 commit
  5. 17 Mar, 2025 2 commits
  6. 13 Mar, 2025 1 commit
  7. 12 Mar, 2025 1 commit
  8. 11 Mar, 2025 8 commits
  9. 10 Mar, 2025 1 commit
  10. 08 Mar, 2025 2 commits
  11. 07 Mar, 2025 13 commits
  12. 04 Mar, 2025 1 commit
    • ml/backend/ggml: consolidate system info logging · 05a01fde
      Michael Yang authored
      - output backend system info when initializing the backend. this ensures
        the information is always present without needing to be requested
        explicitly
      - convert to structured logging
      - enumerate devices rather than backends since devices are ordered
      - track device indices grouped by device name
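      The last two bullets, structured logging with device indices grouped by device name, can be sketched in Go with the standard `log/slog` package. The device names and the `deviceIndices` helper are illustrative; the real list comes from GGML device enumeration:

      ```go
      package main

      import (
      	"log/slog"
      	"os"
      )

      // deviceIndices assigns each device an index grouped by name, so
      // two GPUs with the same name get indices 0 and 1 rather than
      // sharing an index or using the global enumeration order.
      func deviceIndices(names []string) []int {
      	seen := map[string]int{}
      	out := make([]int, len(names))
      	for i, n := range names {
      		out[i] = seen[n]
      		seen[n]++
      	}
      	return out
      }

      func main() {
      	names := []string{"NVIDIA GeForce RTX 3090", "NVIDIA GeForce RTX 3090", "CPU"}
      	logger := slog.New(slog.NewTextHandler(os.Stderr, nil))
      	for i, idx := range deviceIndices(names) {
      		// structured key=value logging instead of free-form prints
      		logger.Info("system", "device", names[i], "index", idx)
      	}
      }
      ```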
  13. 03 Mar, 2025 1 commit
  14. 02 Mar, 2025 3 commits
    • ml: Enable support for flash attention · 21aa666a
      Jesse Gross authored
      The GGML flash attention kernel has specific requirements for
      padding and permutation. This adds support to the KV cache
      for conforming to these requirements so that flash attention
      can be enabled.
      
      Flash attention can be used in the same situations as the llama
      engine and is enabled by the user in the same way.
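      The padding requirement mentioned above amounts to rounding the KV cache's sequence dimension up to a kernel-friendly multiple. The helper below is a sketch; the multiple of 256 is an assumption for illustration, not a value taken from the commit:

      ```go
      package main

      import "fmt"

      // padTo rounds n up to the next multiple of m. The GGML flash
      // attention kernel requires the KV sequence dimension to be padded;
      // 256 below is an illustrative multiple, not the confirmed value.
      func padTo(n, m int) int {
      	return ((n + m - 1) / m) * m
      }

      func main() {
      	for _, n := range []int{1, 256, 300} {
      		fmt.Println(n, "->", padTo(n, 256))
      	}
      }
      ```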
    • ml: Empty tensor constructor for tensors · ee141cc8
      Jesse Gross authored
      In cases where we allocate a tensor and then fully overwrite it with
      copied data, it is wasteful to first zero out the memory.
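      The distinction can be sketched as a pair of constructors (names hypothetical): `Zeros` clears the buffer, `Empty` only allocates, for tensors whose contents will be fully overwritten anyway. In this Go sketch the runtime still zeroes the slice; the saving applies in the C backend, where the memset is skipped:

      ```go
      package main

      import "fmt"

      type Tensor struct {
      	data   []float32
      	zeroed bool // whether the backend cleared the buffer
      }

      // Zeros allocates a tensor and clears its memory.
      func Zeros(n int) Tensor { return Tensor{make([]float32, n), true} }

      // Empty allocates without clearing; callers must overwrite every
      // element before reading it back.
      func Empty(n int) Tensor { return Tensor{make([]float32, n), false} }

      func main() {
      	t := Empty(4)
      	copy(t.data, []float32{1, 2, 3, 4}) // fully overwritten, so zeroing was wasted work
      	fmt.Println(t.data, t.zeroed)
      }
      ```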
    • ggml-backend: Store parent backend as part of tensor · 55e5776c
      Jesse Gross authored
      It can be important for a tensor to know what backend it came from -
      for example, to know if flash attention is enabled.
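      The idea reduces to keeping a back-pointer from each tensor to the backend that created it, so code holding only a tensor can answer backend-level questions such as whether flash attention is enabled. The field and type names here are illustrative:

      ```go
      package main

      import "fmt"

      type Backend struct {
      	name           string
      	flashAttention bool
      }

      // Tensor records its parent backend at creation time.
      type Tensor struct {
      	backend *Backend
      	data    []float32
      }

      func (b *Backend) NewTensor(n int) *Tensor {
      	return &Tensor{backend: b, data: make([]float32, n)}
      }

      func main() {
      	b := &Backend{name: "ggml", flashAttention: true}
      	t := b.NewTensor(8)
      	// from the tensor alone, we can recover backend properties
      	fmt.Println(t.backend.name, t.backend.flashAttention)
      }
      ```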