sglang · Commits · c178abda

Commit c178abda (unverified), authored May 10, 2025 by JieXin Liang, committed via GitHub on May 10, 2025.

[fix] fix determine_n_share_experts_fusion (#6118)
Parent: b29a026e

Showing 1 changed file with 5 additions and 3 deletions:

python/sglang/srt/models/deepseek_v2.py (+5 −3)
python/sglang/srt/models/deepseek_v2.py @ c178abda

```diff
@@ -1486,14 +1486,15 @@ class DeepseekV2ForCausalLM(nn.Module):
         if self.n_share_experts_fusion > 0:
             # Only Deepseek V3/R1 can use shared experts fusion optimization now.
             if (
-                self.config.architectures[0] != architecture
+                not _is_cuda
+                or self.config.architectures[0] != architecture
                 or self.config.n_routed_experts != 256
             ):
                 self.n_share_experts_fusion = 0
                 global_server_args_dict["n_share_experts_fusion"] = 0
                 log_info_on_rank0(
                     logger,
-                    "Only Deepseek V3/R1 can use shared experts fusion optimization. Shared experts fusion optimization is disabled.",
+                    "Only Deepseek V3/R1 on NV-platform can use shared experts fusion optimization. Shared experts fusion optimization is disabled.",
                 )
             else:
                 assert (
@@ -1501,7 +1502,8 @@ class DeepseekV2ForCausalLM(nn.Module):
             ), f"Shared experts fusion optimization is enabled in DeepSeek V3/R1, set it to {self.tp_size} can get best optimized performace."
         elif self.n_share_experts_fusion == 0:
             if (
-                torch.cuda.get_device_capability("cuda") >= (9, 0)
+                _is_cuda
+                and torch.cuda.get_device_capability("cuda") >= (9, 0)
                 and self.config.architectures[0] == architecture
                 and self.config.n_routed_experts == 256
                 and (not global_server_args_dict["enable_deepep_moe"])
```
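The essence of the fix is ordering: the `_is_cuda` flag is checked first, so the short-circuiting `and`/`or` operators prevent `torch.cuda.get_device_capability` from ever being called on a non-NVIDIA platform, where it would raise. The sketch below illustrates that guard pattern in isolation; the function name, the `get_capability` callable, and the `DEEPSEEK_V3_ARCH` constant are stand-ins for illustration, not the actual sglang API.

```python
# Sketch of the short-circuit guard this commit introduces, under the
# assumption that the capability probe raises when no CUDA device exists.
DEEPSEEK_V3_ARCH = "DeepseekV3ForCausalLM"  # hypothetical constant


def should_enable_fusion(is_cuda, get_capability, arch, n_routed_experts):
    """Return True only when shared-experts fusion is safe to enable.

    `get_capability` is evaluated after `is_cuda`, so the short-circuit
    `and` guarantees it never runs on platforms without CUDA -- the same
    reordering the patch applies to the real condition.
    """
    return (
        is_cuda
        and get_capability() >= (9, 0)  # Hopper (SM90) or newer
        and arch == DEEPSEEK_V3_ARCH
        and n_routed_experts == 256
    )


def boom():
    # Simulates torch.cuda.get_device_capability on a CUDA-less platform.
    raise RuntimeError("no CUDA device")


# On a non-CUDA platform the capability probe is never reached:
print(should_enable_fusion(False, boom, DEEPSEEK_V3_ARCH, 256))            # False
print(should_enable_fusion(True, lambda: (9, 0), DEEPSEEK_V3_ARCH, 256))   # True
print(should_enable_fusion(True, lambda: (8, 0), DEEPSEEK_V3_ARCH, 256))   # False
```

Without the reordering, the first call would raise instead of cleanly disabling the optimization, which is exactly the failure mode the patch fixes.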