sglang · Commits · bacb3825
"vscode:/vscode.git/clone" did not exist on "75b3839a71b79efde600f7e08b63aa4466008c4a"
Unverified commit bacb3825, authored Oct 29, 2025 by b8zhong, committed by GitHub on Oct 29, 2025.
fix: llama 4 + trtllm gen + fp8 kv cache incompatibility (#12347)
Parent: b53d9e11
Showing 1 changed file with 7 additions and 0 deletions.

python/sglang/srt/server_args.py (+7, −0)
@@ -971,6 +971,13 @@ class ServerArgs:
                 logger.warning(
                     "Use trtllm_mha as attention backend on sm100 for Llama4 model"
                 )
+            if is_sm100_supported() and self.attention_backend == "trtllm_mha":
+                # TODO(brayden): remove this once TRTLLM MHA kernel for FP8 w/ tileSizeKv=128 is available.
+                # This is a Llama 4 specific issue only.
+                self.kv_cache_dtype = "bfloat16"
+                logger.warning(
+                    "Setting kv_cache_dtype to bfloat16 for Llama4 with trtllm_mha backend, due to a missing FlashInfer TRTLLM MHA kernel for FP8 KV Cache"
+                )
             if is_sm100_supported() and self.moe_runner_backend == "auto":
                 if self.quantization in {"fp8", "modelopt_fp8"}:
                     self.moe_runner_backend = "flashinfer_trtllm"
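The guard this commit adds can be sketched in isolation. The following is a minimal, self-contained approximation of the argument-normalization behavior: `ServerArgsSketch`, the stubbed `is_sm100_supported()`, and the field defaults are all hypothetical stand-ins for sglang's real `ServerArgs` machinery, reduced to the two branches the diff touches.

```python
import logging
from dataclasses import dataclass

logger = logging.getLogger("sglang.sketch")


def is_sm100_supported() -> bool:
    # Stub for sglang's real device check; assumed to return True
    # on SM100 (Blackwell) GPUs. Hard-coded here for illustration.
    return True


@dataclass
class ServerArgsSketch:
    # Hypothetical stand-in for sglang's ServerArgs, reduced to the
    # fields the fix touches. Defaults are illustrative only.
    attention_backend: str = "trtllm_mha"
    kv_cache_dtype: str = "fp8_e4m3"
    moe_runner_backend: str = "auto"
    quantization: str = "fp8"

    def normalize(self) -> None:
        # Mirror of the commit's guard: TRTLLM MHA on SM100 has no FP8
        # KV-cache kernel (tileSizeKv=128) for Llama 4, so the KV cache
        # dtype is forced back to bfloat16.
        if is_sm100_supported() and self.attention_backend == "trtllm_mha":
            self.kv_cache_dtype = "bfloat16"
            logger.warning(
                "Setting kv_cache_dtype to bfloat16 for Llama4 with "
                "trtllm_mha backend"
            )
        # Pre-existing branch from the surrounding context: pick the
        # flashinfer_trtllm MoE runner for fp8 quantization on SM100.
        if is_sm100_supported() and self.moe_runner_backend == "auto":
            if self.quantization in {"fp8", "modelopt_fp8"}:
                self.moe_runner_backend = "flashinfer_trtllm"


args = ServerArgsSketch()
args.normalize()
print(args.kv_cache_dtype)      # bfloat16
print(args.moe_runner_backend)  # flashinfer_trtllm
```

The key design point of the fix is that the override happens silently (with a warning) rather than erroring out, so a user who requests an FP8 KV cache together with `trtllm_mha` on SM100 still gets a working server, just with a bfloat16 KV cache until the missing FlashInfer kernel lands.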