- 08 Dec, 2025 (1 commit)
Michael Yang authored
change to a flatter directory structure and group the options with the function; update models to call rope in one place
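A minimal sketch of what grouping the RoPE options with the call might look like, on plain float32 slices; the Options type, its fields, and the Apply function are hypothetical stand-ins, not ollama's actual API:

```go
package main

import (
	"fmt"
	"math"
)

// Options bundles the RoPE parameters that models previously passed
// around individually; all names here are hypothetical.
type Options struct {
	Dims  int     // number of leading dimensions to rotate
	Base  float32 // frequency base (theta)
	Scale float32 // linear position-scaling factor
}

// Apply rotates consecutive pairs of x in place for a token at pos,
// giving every model one shared RoPE call site.
func Apply(x []float32, pos int, opts Options) {
	for i := 0; i+1 < opts.Dims; i += 2 {
		theta := float64(opts.Scale) * float64(pos) /
			math.Pow(float64(opts.Base), float64(i)/float64(opts.Dims))
		sin, cos := math.Sincos(theta)
		x0, x1 := float64(x[i]), float64(x[i+1])
		x[i] = float32(x0*cos - x1*sin)
		x[i+1] = float32(x0*sin + x1*cos)
	}
}

func main() {
	q := []float32{1, 0, 1, 0}
	Apply(q, 3, Options{Dims: 4, Base: 10000, Scale: 1})
	fmt.Println(q) // rotated query for position 3
}
```

With the options in one struct, a change such as a new scaling scheme lands in a single definition instead of in every model file.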
- 13 Nov, 2025 (1 commit)
Michael Yang authored
* use slice/chunks (see the sketch after this list)
* bert
* llama4
* gemma3n
* gptoss
* mistral3
* qwen3vl
* qwen25vl
* deepseek2
* remove unused ops
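A minimal sketch of the slice/chunks pattern on plain slices; ollama's real implementation works on backend tensors, and chunks here is a hypothetical stand-in:

```go
package main

import "fmt"

// chunks splits x into n equal parts, so a fused QKV activation can be
// separated with one call instead of three hand-computed slice offsets.
func chunks(x []float32, n int) [][]float32 {
	size := len(x) / n
	out := make([][]float32, n)
	for i := range out {
		out[i] = x[i*size : (i+1)*size]
	}
	return out
}

func main() {
	qkv := make([]float32, 12) // stand-in for a fused Q/K/V activation
	parts := chunks(qkv, 3)
	q, k, v := parts[0], parts[1], parts[2]
	fmt.Println(len(q), len(k), len(v)) // 4 4 4
}
```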
- 28 Oct, 2025 (1 commit)
Michael Yang authored
- 18 Oct, 2025 (1 commit)
Daniel Hiltgen authored
Co-authored-by: Michael Yang <git@mxy.ng>
- 17 Sep, 2025 (1 commit)
Michael Yang authored
* fix(llama): rope scale
* spm llama
* skip moe models
* cleanup
- 16 Sep, 2025 (1 commit)
Michael Yang authored
* use ggml_*_split activations when possible (see the sketch after this list)
* forward qkv
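For context, ggml's split activation variants (e.g. ggml_swiglu_split in llama.cpp) apply a gated activation to separate gate and up tensors in one op, rather than requiring them to be halves of a single fused tensor. A minimal sketch of the equivalent elementwise computation, with hypothetical Go names:

```go
package main

import (
	"fmt"
	"math"
)

// swigluSplit computes silu(gate) * up elementwise, the math a
// split-variant gated-activation op performs on separate inputs.
func swigluSplit(gate, up []float32) []float32 {
	out := make([]float32, len(gate))
	for i, g := range gate {
		silu := g / (1 + float32(math.Exp(float64(-g)))) // x * sigmoid(x)
		out[i] = silu * up[i]
	}
	return out
}

func main() {
	fmt.Println(swigluSplit([]float32{1, -1}, []float32{2, 2}))
}
```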
- 15 Sep, 2025 (2 commits)
Michael Yang authored
* fix truncate
* s/SentencePieceModel/SentencePiece/
* bert
* wordpiece
* refactor pooling
* more tokenizers
* normalize embeddings (see the sketch after this list)
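A minimal sketch of mean pooling followed by L2 normalization, matching the last two items in spirit; these are plain-slice stand-ins with hypothetical names, not ollama's tensor-based embedding path:

```go
package main

import (
	"fmt"
	"math"
)

// meanPool averages a sequence of token embeddings into one vector.
func meanPool(tokens [][]float32) []float32 {
	out := make([]float32, len(tokens[0]))
	for _, t := range tokens {
		for i, x := range t {
			out[i] += x
		}
	}
	for i := range out {
		out[i] /= float32(len(tokens))
	}
	return out
}

// normalize scales v to unit L2 norm so cosine similarity between
// embeddings reduces to a dot product; zero vectors pass through.
func normalize(v []float32) []float32 {
	var sum float64
	for _, x := range v {
		sum += float64(x) * float64(x)
	}
	if sum == 0 {
		return v
	}
	inv := float32(1 / math.Sqrt(sum))
	out := make([]float32, len(v))
	for i, x := range v {
		out[i] = x * inv
	}
	return out
}

func main() {
	emb := normalize(meanPool([][]float32{{1, 2}, {3, 4}}))
	fmt.Println(emb) // unit-length mean of the two token vectors
}
```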
Michael Yang authored
this cleans up the model interface slightly without much impact on other areas
- 29 Jul, 2025 (1 commit)
Oliver Simons authored
* Enable CUDA Graphs for gemma3n. Similar to https://github.com/ggml-org/llama.cpp/pull/14741, though ollama has a slightly different model graph than llama.cpp, which requires different workaround checks.
* Remove the residual check by reshaping differently in the gemma3n model; this should make the heuristics more robust.
- 27 Jun, 2025 (1 commit)
Michael Yang authored
- 26 Jun, 2025 (1 commit)
Michael Yang authored
* update patches
* cherry pick metal mean kernel
* cherry pick cuda mean kernel
* gemma3n