1. 02 Oct, 2025 1 commit
    • Update GGML to b6646 (#12245) · c68f367e
      Daniel Hiltgen authored
      Notable EOLs with this change:
      - macOS v12 and v13 are no longer supported (v14+ required)
      - AMD gfx900 and gfx906 are no longer supported
  2. 14 Aug, 2025 1 commit
    • update vendored llama.cpp and ggml (#11823) · 1a19df1f
      Michael Yang authored
      * TEMPORARY: Update the llama.cpp upstream to my fork's Granite Four branch
      
      This will be redone once my branch is merged upstream in llama.cpp
      
      * feat: Update all patches
      
      There are a number of patches that are no longer needed at all:
      
      - 0003-embeddings: Embeddings entirely overhauled on master
      - 0008-ensure-KV-cache-is-fully-defragmented: KV caching entirely
          overhauled on master
      - 0019-metal-add-mean-kernel-14267: Merged upstream
      - 0020-CUDA-add-mean-operation-14313: Merged upstream
      
      * feat: Sync llama.cpp and ggml
      
      * fix: Update rsync-filter for all moved/new/removed files
      
      * fix: Add files missing from sync
      
      * fix: Update ggml rsync-filter for new ggml-cpu/arch subdirs
      
      * fix: Add ggml files missing from sync
      
      * fix: Narrow llama.cpp rsync-filter to not include mtmd main tool cpp files
      
      * fix: Remove mtmd main cpp files
      
      * fix: Add missing include in sampling_ext.cpp
      
      * fix: Update llama.go to use mtmd instead of clip/llava
      
      * fix: Add patch for mtmd_input_text
      
      *...
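      The rsync-filter updates above control which upstream files land in the vendored tree. For context, an rsync filter file is an ordered list of include (`+`) and exclude (`-`) rules, where the first matching rule wins; the sketch below is hypothetical (the paths and rules are illustrative, not the repository's actual filter), showing how a sync could keep library sources while excluding a standalone tool's main cpp file:

      ```
      # hypothetical rsync-filter sketch, applied via:
      #   rsync -a --filter='merge rsync-filter' upstream/ vendored/
      # first match wins, so the narrow exclusion precedes the broad include
      - tools/mtmd/mtmd-cli.cpp
      + tools/***
      - *
      ```

      With rules in this order, everything under `tools/` is copied except the excluded CLI entry point; the trailing `- *` drops anything not explicitly included.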
  3. 26 Jun, 2025 1 commit
  4. 14 May, 2025 1 commit