sglang · Commits · 07452cbe

Commit 07452cbe (unverified)
Authored Jul 14, 2025 by Chunyuan WU; committed by GitHub, Jul 14, 2025.

[CPU] fix no attribute 'can_fuse_mlp_allreduce' error (#8010)

Parent: a562c8a3

Showing 1 changed file with 5 additions and 3 deletions (+5 −3):
python/sglang/srt/models/deepseek_v2.py
--- a/python/sglang/srt/models/deepseek_v2.py
+++ b/python/sglang/srt/models/deepseek_v2.py
@@ -462,7 +462,7 @@ class DeepseekV2MoE(nn.Module):
         if hasattr(self, "shared_experts") and use_intel_amx_backend(
             self.shared_experts.gate_up_proj
         ):
-            return self.forward_cpu(hidden_states)
+            return self.forward_cpu(hidden_states, can_fuse_mlp_allreduce)

         shared_output = self._forward_shared_experts(hidden_states)
         # router_logits: (num_tokens, n_experts)
@@ -479,7 +479,9 @@ class DeepseekV2MoE(nn.Module):
             final_hidden_states = tensor_model_parallel_all_reduce(final_hidden_states)
         return final_hidden_states

-    def forward_cpu(self, hidden_states: torch.Tensor) -> torch.Tensor:
+    def forward_cpu(
+        self, hidden_states: torch.Tensor, can_fuse_mlp_allreduce: bool = False
+    ) -> torch.Tensor:
         # router_logits: (num_tokens, n_experts)
         router_logits = self.gate(hidden_states)
         fused_experts_out = self.experts(
@@ -528,7 +530,7 @@ class DeepseekV2MoE(nn.Module):
             None,  # a2_scale
             True,  # is_vnni
         )
-        if self.tp_size > 1 and not self.can_fuse_mlp_allreduce:
+        if self.tp_size > 1 and not can_fuse_mlp_allreduce:
             final_hidden_states = tensor_model_parallel_all_reduce(final_hidden_states)
         return final_hidden_states
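The pattern behind this fix can be illustrated in isolation. The sketch below is a hypothetical minimal class (not sglang's actual `DeepseekV2MoE`): the caller computes a local `can_fuse_mlp_allreduce` flag, but the CPU path read it as `self.can_fuse_mlp_allreduce`, an attribute that was never set, raising `AttributeError`. The fix threads the flag through as a keyword parameter with a conservative default of `False`, so the all-reduce still runs unless fusion is explicitly requested.

```python
class MoELayer:
    """Hypothetical stand-in for an MoE layer; names mirror the diff above."""

    def __init__(self, tp_size: int):
        self.tp_size = tp_size  # tensor-parallel world size

    def forward(self, hidden_states, can_fuse_mlp_allreduce: bool = False):
        # Before the fix the call was self.forward_cpu(hidden_states), and
        # forward_cpu then read self.can_fuse_mlp_allreduce -> AttributeError,
        # because the flag only ever existed as a local variable here.
        return self.forward_cpu(hidden_states, can_fuse_mlp_allreduce)

    def forward_cpu(self, hidden_states, can_fuse_mlp_allreduce: bool = False):
        # Skip the per-layer all-reduce only when the caller says a fused
        # MLP all-reduce will handle it; default False keeps correctness.
        if self.tp_size > 1 and not can_fuse_mlp_allreduce:
            hidden_states = self._all_reduce(hidden_states)
        return hidden_states

    def _all_reduce(self, x):
        # Placeholder for tensor_model_parallel_all_reduce; identity here.
        return x
```

Defaulting the new parameter to `False` also keeps any existing `forward_cpu(hidden_states)` call sites working unchanged.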