sglang · Commit b01eeb80 (unverified)
Authored Aug 05, 2025 by Shu Wang; committed via GitHub Aug 04, 2025
Parent: 1ea94d3b

[NVIDIA]Fix local_num_experts for EP (#8779)

2 changed files, 4 additions and 2 deletions:
  python/sglang/srt/layers/moe/fused_moe_triton/layer.py (+2, -1)
  python/sglang/srt/layers/quantization/modelopt_quant.py (+2, -1)
python/sglang/srt/layers/moe/fused_moe_triton/layer.py

@@ -200,7 +200,8 @@ class FusedMoE(torch.nn.Module):
         self.quant_config = quant_config
         self.quant_method.create_weights(
             layer=self,
-            num_experts=self.num_local_experts,
+            num_experts=self.num_experts,
+            num_local_experts=self.num_local_experts,
             hidden_size=hidden_size,
             # FIXME: figure out which intermediate_size to use
             intermediate_size=self.intermediate_size_per_partition,
python/sglang/srt/layers/quantization/modelopt_quant.py

@@ -752,6 +752,7 @@ class ModelOptNvFp4FusedMoEMethod(FusedMoEMethodBase):
         self,
         layer: torch.nn.Module,
         num_experts: int,
+        num_local_experts: int,
         hidden_size: int,
         intermediate_size_per_partition: int,
         params_dtype: torch.dtype,

@@ -765,7 +766,7 @@ class ModelOptNvFp4FusedMoEMethod(FusedMoEMethodBase):
         # TODO(ch-wan): check if this is needed
         layer.num_experts = num_experts
-        layer.num_local_experts = num_experts
+        layer.num_local_experts = num_local_experts
         layer.intermediate_size_per_partition = intermediate_size_per_partition
         layer.params_dtype = params_dtype
         layer.quant_config = self.quant_config
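The diff distinguishes the global expert count (`num_experts`) from the per-rank count (`num_local_experts`): before the fix, `create_weights` received only the local count and `ModelOptNvFp4FusedMoEMethod` then set `layer.num_local_experts = num_experts`, conflating the two. The following is a minimal standalone sketch, not sglang's actual implementation, of why the counts differ under expert parallelism (EP); the helper name `local_expert_count` and the even-sharding assumption are illustrative only.

```python
# Minimal sketch of expert sharding under expert parallelism (EP):
# each EP rank materializes weights for only a slice of the experts,
# while routing still scores all experts globally. Sizing per-rank
# weight tensors by the global count (or vice versa) is a bug.

def local_expert_count(num_experts: int, ep_size: int) -> int:
    # Hypothetical helper assuming experts divide evenly across EP ranks.
    assert num_experts % ep_size == 0, "experts must shard evenly across ranks"
    return num_experts // ep_size

if __name__ == "__main__":
    num_experts = 64   # global expert count, used for routing logits
    ep_size = 8        # number of expert-parallel ranks
    num_local_experts = local_expert_count(num_experts, ep_size)
    # Each of the 8 ranks holds weights for 8 of the 64 experts.
    print(num_experts, num_local_experts)
```

With EP disabled (`ep_size == 1`) the two counts coincide, which is why the original conflation only surfaced once experts were actually sharded.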