sglang · Commit 71fc7b7f (Unverified)

[Fix] KV-cache eviction mismatch across PP ranks in DeepSeek V3/R1 (#10214)

Authored Sep 09, 2025 by Rain H; committed by GitHub, Sep 09, 2025
Parent: 9ab72f98

Showing 1 changed file with 10 additions and 0 deletions (+10, −0)
python/sglang/srt/model_executor/model_runner.py @ 71fc7b7f
@@ -1260,6 +1260,16 @@ class ModelRunner:
             // self.server_args.page_size
             * self.server_args.page_size
         )
+        # different pp rank may have different num of layers, so we need to reduce the max_total_num_tokens
+        if self.pp_size > 1:
+            tensor = torch.tensor(self.max_total_num_tokens, dtype=torch.int64)
+            torch.distributed.all_reduce(
+                tensor,
+                op=torch.distributed.ReduceOp.MIN,
+                group=get_world_group().cpu_group,
+            )
+            self.max_total_num_tokens = tensor.item()
+
         # create token size for hybrid cache
         if self.is_hybrid:
             self.set_num_token_hybrid()
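The added block makes every pipeline-parallel rank adopt the smallest KV-cache token budget in the group, so eviction decisions stay consistent even though ranks host different numbers of layers. Below is a minimal, standalone sketch of the same MIN all-reduce pattern, not sglang code: the per-rank capacities, the gloo/torchrun setup, and the function name sync_max_total_num_tokens are assumptions for illustration, while the actual fix reduces over sglang's get_world_group().cpu_group.

    # Sketch only: hypothetical per-rank budgets, synchronized via a MIN all-reduce.
    import torch
    import torch.distributed as dist


    def sync_max_total_num_tokens(local_max_total_num_tokens: int, cpu_group) -> int:
        """Return the smallest token budget across all ranks in `cpu_group`."""
        tensor = torch.tensor(local_max_total_num_tokens, dtype=torch.int64)
        dist.all_reduce(tensor, op=dist.ReduceOp.MIN, group=cpu_group)
        return int(tensor.item())


    def main() -> None:
        # torchrun provides RANK / WORLD_SIZE / MASTER_ADDR / MASTER_PORT.
        dist.init_process_group(backend="gloo")
        rank = dist.get_rank()

        # Hypothetical capacities: pretend each PP rank profiled a different
        # amount of free memory and derived a different budget.
        local_capacity = 100_000 - 10_000 * rank

        synced = sync_max_total_num_tokens(local_capacity, dist.group.WORLD)
        print(f"rank {rank}: local={local_capacity} synced={synced}")

        dist.destroy_process_group()


    if __name__ == "__main__":
        main()

Launched with, for example, torchrun --nproc_per_node=2 sync_tokens.py (a hypothetical file name), every rank prints the same synced value: the minimum of the local capacities, which mirrors how the fix keeps max_total_num_tokens identical across PP ranks.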