llm: Remove unneeded warning with flash attention enabled
If flash attention is enabled without KV cache quantization, we currently always get this warning: level=WARN source=server.go:226 msg="kv cache type not supported by model" type=""
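A minimal sketch of the fix, assuming hypothetical helpers `kvCacheTypeFromStr` and `warnIfUnsupported` (not the actual server.go code): the warning should be skipped entirely when no cache type was requested, instead of logging with an empty type.

```go
package main

import "log/slog"

// kvCacheTypeFromStr is a hypothetical stand-in for the cache-type lookup:
// it reports whether the requested quantization type is recognized.
func kvCacheTypeFromStr(t string) (string, bool) {
	switch t {
	case "f16", "q8_0", "q4_0":
		return t, true
	}
	return "", false
}

// warnIfUnsupported logs only when a cache type was actually requested,
// so an empty type (quantization disabled) stays silent.
func warnIfUnsupported(kvCacheType string) {
	if kvCacheType == "" {
		return // no KV cache quantization requested: nothing to warn about
	}
	if _, ok := kvCacheTypeFromStr(kvCacheType); !ok {
		slog.Warn("kv cache type not supported by model", "type", kvCacheType)
	}
}

func main() {
	warnIfUnsupported("")      // silent: quantization disabled
	warnIfUnsupported("bogus") // warns: unknown type explicitly requested
}
```

The guard on the empty string is the substance of the change: previously the lookup failed for `""` and the warning fired even though the user never asked for quantization.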