Unverified commit e0c3cee1, authored May 10, 2024 by mobicham, committed by GitHub on May 10, 2024.

hqq - fix weight check in check_quantized_param (#30748)

* hqq - fix weight check in check_quantized_param
* ruff format
parent 8ce4fefc

Showing 1 changed file with 1 addition and 1 deletion:

src/transformers/quantizers/quantizer_hqq.py (+1, -1)
@@ -101,7 +101,7 @@ class HqqHfQuantizer(HfQuantizer):
     ) -> bool:
         module, tensor_name = get_module_from_name(model, param_name)
 
-        return isinstance(module, torch.nn.Linear)
+        return isinstance(module, torch.nn.Linear) and (tensor_name == "weight")
 
     def create_quantized_param(
         self,
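The change narrows HQQ's parameter check to the "weight" tensor of an nn.Linear module; with the old check, any tensor owned by a Linear (for example its bias) also returned True, because only the module type was inspected. Below is a minimal standalone sketch of the updated predicate; the helper name is_hqq_quantizable is hypothetical, and in transformers the logic lives in HqqHfQuantizer.check_quantized_param.

```python
import torch

def is_hqq_quantizable(module: torch.nn.Module, tensor_name: str) -> bool:
    # Hypothetical standalone version of the updated check: only the "weight"
    # tensor of an nn.Linear is routed to HQQ quantization; other tensors
    # (e.g. biases, non-Linear parameters) are left untouched.
    return isinstance(module, torch.nn.Linear) and (tensor_name == "weight")

layer = torch.nn.Linear(8, 8)
print(is_hqq_quantizable(layer, "weight"))  # True  -> would be quantized
print(is_hqq_quantizable(layer, "bias"))    # False -> before this fix, the check returned True here
```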