xuwx1 / LightX2V · Commit 5be4fe5a (unverified)

fix mlu int8 quant (#531)

1. Fix MLU int8 quantization

Authored Nov 28, 2025 by Kane; committed via GitHub, Nov 28, 2025.
Parent: f7665abb
Changes: 1 changed file, with 3 additions and 1 deletion (+3 / -1)

lightx2v/common/ops/mm/mm_weight.py
@@ -1204,5 +1204,7 @@ class MMWeightWint8channelAint8channeldynamicMlu(MMWeightQuantTemplate):
     def apply(self, input_tensor):
         dtype = input_tensor.dtype
         input_tensor_quant, input_tensor_scale = self.act_quant_func(input_tensor)
-        output_tensor = tmo.scaled_matmul(input_tensor_quant, self.weight.contiguous(), input_tensor_scale, self.weight_scale.squeeze(-1), output_dtype=dtype, use_hp_active=True)
+        output_tensor = tmo.scaled_matmul(
+            input_tensor_quant, self.weight.contiguous(), input_tensor_scale, self.weight_scale.squeeze(-1), bias=self.bias if self.bias is not None else None, output_dtype=dtype, use_hp_active=True
+        )
         return output_tensor
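For context on what the fixed call computes: `tmo.scaled_matmul` is a Cambricon MLU kernel, so it cannot run off-device, but the arithmetic it performs (int8 x int8 matmul, dequantized by per-token activation scales and per-output-channel weight scales, with the bias term this commit adds) can be sketched in portable NumPy. This is a hypothetical reference, not the MLU kernel; the helper names below are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)

def quant_per_token_int8(x):
    """Dynamic per-token int8 quantization, analogous to act_quant_func:
    each row gets its own scale so outliers in one token don't crush the rest."""
    scale = np.abs(x).max(axis=-1, keepdims=True) / 127.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def scaled_matmul_ref(q_x, q_w, s_x, s_w, bias=None, output_dtype=np.float32):
    """Reference for the scaled matmul: int32 accumulation of int8 operands,
    then dequantization with row scales s_x and per-channel weight scales s_w,
    then the optional bias added in float (the behavior the commit fixes)."""
    acc = q_x.astype(np.int32) @ q_w.astype(np.int32).T  # (tokens, out_features)
    out = acc.astype(np.float64) * s_x * s_w[None, :]    # dequantize
    if bias is not None:
        out = out + bias
    return out.astype(output_dtype)

# Float reference data: activations, weight (out_features, in_features), bias.
x = rng.standard_normal((4, 64)).astype(np.float32)
w = rng.standard_normal((32, 64)).astype(np.float32)
b = rng.standard_normal(32).astype(np.float32)

# Quantize: activations dynamically per token, weight statically per channel.
q_x, s_x = quant_per_token_int8(x)
s_w = np.abs(w).max(axis=-1) / 127.0
q_w = np.clip(np.round(w / s_w[:, None]), -127, 127).astype(np.int8)

y = scaled_matmul_ref(q_x, q_w, s_x, s_w, bias=b)
y_ref = x @ w.T + b  # full-precision result, matched up to quantization error
```

The `weight_scale.squeeze(-1)` in the diff plays the role of `s_w` here: a per-output-channel scale stored as `(out_features, 1)` and flattened to 1-D before the kernel call.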