sglang · Commit 191d836f (unverified)

fix: minor fix for modelopt weight load compatibility (#7953)

Authored Jul 12, 2025 by Peng Zhang; committed by GitHub on Jul 11, 2025
Parent: 86044712

Showing 1 changed file with 6 additions and 1 deletion.

python/sglang/srt/layers/moe/fused_moe_triton/layer.py (+6, -1)

@@ -518,6 +518,7 @@ class FusedMoE(torch.nn.Module):
                 self.quant_method.enable_flashinfer_moe = self.enable_flashinfer_moe
 
         assert self.quant_method is not None
+        self.quant_config = quant_config
         self.quant_method.create_weights(
             layer=self,
             num_experts=self.local_num_experts,
@@ -661,7 +662,11 @@
         ):
             raise ValueError("expert_data and loaded_weight must be torch.Tensor")
-        if expert_data.dim() != 2 or loaded_weight.dim() != 2:
+        if (
+            self.quant_config is not None
+            and "modelopt" in self.quant_config.get_name()
+            and (expert_data.dim() != 2 or loaded_weight.dim() != 2)
+        ):
             raise ValueError(
                 f"Expected 2D tensors, got expert_data shape {expert_data.shape} and loaded_weight shape {loaded_weight.shape}"
             )
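
Read in isolation, the new guard enforces the strict 2-D shape check only when the layer carries a quantization config whose name contains "modelopt"; with any other config (or none), a non-2-D expert_data / loaded_weight pair is no longer rejected at this point. The snippet below is a minimal standalone sketch of that logic, not the sglang implementation itself: DummyQuantConfig and check_expert_weight are hypothetical stand-ins for the quant_config stored in the first hunk and for the surrounding weight-loading code.

# Minimal, self-contained sketch of the patched guard (illustrative only).
# "DummyQuantConfig" and "check_expert_weight" are hypothetical stand-ins;
# in sglang the check lives inside FusedMoE's weight-loading path and the
# config object is the one stored as self.quant_config.
import torch


class DummyQuantConfig:
    """Stand-in for a quantization config exposing get_name()."""

    def __init__(self, name: str):
        self._name = name

    def get_name(self) -> str:
        return self._name  # e.g. "fp8" or "modelopt_fp4"


def check_expert_weight(expert_data, loaded_weight, quant_config=None):
    if not isinstance(expert_data, torch.Tensor) or not isinstance(
        loaded_weight, torch.Tensor
    ):
        raise ValueError("expert_data and loaded_weight must be torch.Tensor")
    # Post-patch condition: the 2-D requirement is enforced only for
    # modelopt-style configs instead of unconditionally.
    if (
        quant_config is not None
        and "modelopt" in quant_config.get_name()
        and (expert_data.dim() != 2 or loaded_weight.dim() != 2)
    ):
        raise ValueError(
            f"Expected 2D tensors, got expert_data shape {expert_data.shape} "
            f"and loaded_weight shape {loaded_weight.shape}"
        )


# Non-2-D tensors pass when no modelopt-style config is attached
# (the pre-patch check would have raised here):
check_expert_weight(torch.zeros(4), torch.zeros(4))
check_expert_weight(torch.zeros(4), torch.zeros(4), DummyQuantConfig("fp8"))

# With a modelopt-style config, non-2-D tensors are still rejected:
try:
    check_expert_weight(
        torch.zeros(4), torch.zeros(4), DummyQuantConfig("modelopt_fp4")
    )
except ValueError as err:
    print(err)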