OpenDAS / AutoAWQ

Commit 6534f5e6, authored Sep 08, 2023 by Casper Hansen
Parent: 5db43a7f

Fix variables
Showing 2 changed files with 2 additions and 2 deletions:

    awq/quantize/auto_clip.py   +1 -1
    awq/quantize/auto_scale.py  +1 -1
awq/quantize/auto_clip.py

@@ -43,7 +43,7 @@ def auto_clip_layer(w,
             max_val = org_max_val * (1 - i_s / n_grid)
             min_val = -max_val
             cur_w = torch.clamp(w, min_val, max_val)
-            q_w = pseudo_quantize_tensor(cur_w, **quant_config)
+            q_w = pseudo_quantize_tensor(cur_w, w_bit=quant_config["w_bit"], q_group_size=quant_config["q_group_size"])
             cur_out = (input_feat * q_w).sum(dim=-1)  # co, 1, n_group, 1
awq/quantize/auto_scale.py

@@ -98,7 +98,7 @@ def auto_scale_block(awq_model,
     from .quantizer import pseudo_quantize_tensor
     # firstly, get the weight quantize function
     if quant_config['w_bit'] is not None:
         def w_quantize_func(p):
-            return pseudo_quantize_tensor(p, **quant_config).detach()
+            return pseudo_quantize_tensor(p, w_bit=quant_config["w_bit"], q_group_size=quant_config["q_group_size"]).detach()
     else:
         def w_quantize_func(p):
             return p
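The likely motivation for this commit can be sketched as follows: if `quant_config` carries keys beyond the parameters `pseudo_quantize_tensor` accepts (for example a `zero_point` entry, used here purely as an illustrative assumption), then unpacking the whole dict with `**quant_config` raises a `TypeError`, while passing only the needed keys explicitly does not. The stub below is a hypothetical stand-in for the real function in `awq/quantize/quantizer.py`, just to demonstrate the failure mode:

```python
# Hypothetical stub standing in for awq.quantize.quantizer.pseudo_quantize_tensor;
# it accepts only the two quantization parameters, like the calls in this commit.
def pseudo_quantize_tensor(w, w_bit=4, q_group_size=128):
    # Placeholder body: just echo the arguments it received.
    return (w, w_bit, q_group_size)

# Assumed config shape: an extra key ("zero_point") beyond the two parameters.
quant_config = {"w_bit": 4, "q_group_size": 128, "zero_point": True}

# Before the fix: unpacking the whole dict passes the extra key too,
# which the function does not accept -> TypeError.
try:
    pseudo_quantize_tensor("weights", **quant_config)
except TypeError as exc:
    print("unpacking failed:", exc)

# After the fix: select only the keys the function actually takes.
result = pseudo_quantize_tensor(
    "weights",
    w_bit=quant_config["w_bit"],
    q_group_size=quant_config["q_group_size"],
)
print(result)  # -> ('weights', 4, 128)
```

Passing the keys explicitly also keeps the call site correct if more entries are later added to the config dict.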