sglang · Commit 72dfa96a (unverified)
Authored Sep 14, 2025 by fzyzcjy · Committed by GitHub Sep 13, 2025

Fix cutlass moe accuracy drop caused by attention UB from DP padding mode (#10414)
Parent: 05b01ef4

Showing 2 changed files with 9 additions and 2 deletions:

python/sglang/srt/layers/dp_attention.py  (+6, -1)
python/sglang/srt/model_executor/forward_batch_info.py  (+3, -1)
python/sglang/srt/layers/dp_attention.py

@@ -51,7 +51,12 @@ class DpPaddingMode(IntEnum):
         return self == DpPaddingMode.SUM_LEN
 
     @classmethod
-    def get_dp_padding_mode(cls, global_num_tokens: List[int]) -> DpPaddingMode:
+    def get_dp_padding_mode(
+        cls, is_extend_in_batch, global_num_tokens: List[int]
+    ) -> DpPaddingMode:
+        if is_extend_in_batch:
+            return DpPaddingMode.SUM_LEN
+
         # we choose the mode that minimizes the communication cost
         max_len = max(global_num_tokens)
         sum_len = sum(global_num_tokens)
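The hunk above changes `get_dp_padding_mode` to force SUM_LEN padding whenever the batch contains extend (prefill) requests. A minimal, self-contained sketch of the resulting selection logic follows; note that the cost comparison after computing `max_len` and `sum_len` is elided in the diff, so the tie-break below is a hypothetical placeholder, not sglang's actual rule.

```python
from enum import IntEnum, auto
from typing import List


class DpPaddingMode(IntEnum):
    # MAX_LEN: every DP rank pads its batch to the longest rank's length.
    # SUM_LEN: ranks concatenate, totalling sum(global_num_tokens) tokens.
    MAX_LEN = auto()
    SUM_LEN = auto()


def get_dp_padding_mode(
    is_extend_in_batch: bool, global_num_tokens: List[int]
) -> DpPaddingMode:
    # New early return from the patch: extend (prefill) batches always use
    # SUM_LEN, sidestepping the attention UB this commit fixes.
    if is_extend_in_batch:
        return DpPaddingMode.SUM_LEN
    # we choose the mode that minimizes the communication cost
    max_len = max(global_num_tokens)
    sum_len = sum(global_num_tokens)
    # Hypothetical placeholder for the elided comparison: prefer MAX_LEN
    # only when padding to max_len gathers no more tokens than SUM_LEN would.
    if max_len * len(global_num_tokens) <= sum_len:
        return DpPaddingMode.MAX_LEN
    return DpPaddingMode.SUM_LEN
```

With the early return in place, any prefill batch short-circuits before the cost heuristic runs, so mixed extend/decode scheduling can no longer flip the padding mode mid-batch.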
python/sglang/srt/model_executor/forward_batch_info.py

@@ -686,7 +686,9 @@ class ForwardBatch:
                     (global_num_tokens[i] - 1) // attn_tp_size + 1
                 ) * attn_tp_size
 
-        dp_padding_mode = DpPaddingMode.get_dp_padding_mode(global_num_tokens)
+        dp_padding_mode = DpPaddingMode.get_dp_padding_mode(
+            self.is_extend_in_batch, global_num_tokens
+        )
         self.dp_padding_mode = dp_padding_mode
 
         if dp_padding_mode.is_max_len():
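The context lines in this hunk show the decode-path padding formula `((n - 1) // attn_tp_size + 1) * attn_tp_size`, which rounds each rank's token count up to the next multiple of `attn_tp_size` so the tokens split evenly across attention tensor-parallel ranks. A quick standalone check of that arithmetic (the function name is ours, not sglang's):

```python
def round_up_to_multiple(n: int, attn_tp_size: int) -> int:
    # Same integer arithmetic as the context lines above: ceil-divide n by
    # attn_tp_size, then scale back up to the nearest multiple.
    return ((n - 1) // attn_tp_size + 1) * attn_tp_size


# With attn_tp_size = 4: 1 -> 4, 4 -> 4, 5 -> 8.
```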