OpenDAS / AutoAWQ · Commits

Commit 720a1fce
Authored Sep 13, 2023 by Casper Hansen

Add notes to example

Parent: affd1906
Showing 1 changed file with 2 additions and 0 deletions:

examples/basic_quant.py  (+2, -0)
examples/basic_quant.py @ 720a1fce
@@ -6,6 +6,7 @@ quant_path = 'vicuna-7b-v1.5-awq'
 quant_config = { "zero_point": True, "q_group_size": 128, "w_bit": 4, "version": "GEMM" }

 # Load model
+# NOTE: pass safetensors=True to load safetensors
 model = AutoAWQForCausalLM.from_pretrained(model_path)
 tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
@@ -13,6 +14,7 @@ tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
 model.quantize(tokenizer, quant_config=quant_config)

 # Save quantized model
+# NOTE: pass safetensors=True to save quantized model weights as safetensors
 model.save_quantized(quant_path)
 tokenizer.save_pretrained(quant_path)
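For context, a minimal sketch of what examples/basic_quant.py does once the noted safetensors=True flags are actually passed. The diff above only shows the changed hunks, so the imports and the model_path value here are assumptions; quant_path, quant_config, and the method calls are taken from the hunks themselves.

from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

# model_path is assumed for illustration; only quant_path appears in the diff above.
model_path = 'lmsys/vicuna-7b-v1.5'
quant_path = 'vicuna-7b-v1.5-awq'
quant_config = { "zero_point": True, "q_group_size": 128, "w_bit": 4, "version": "GEMM" }

# Load model
# NOTE: pass safetensors=True to load safetensors (per the note added in this commit)
model = AutoAWQForCausalLM.from_pretrained(model_path, safetensors=True)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

# Quantize
model.quantize(tokenizer, quant_config=quant_config)

# Save quantized model
# NOTE: pass safetensors=True to save quantized model weights as safetensors
model.save_quantized(quant_path, safetensors=True)
tokenizer.save_pretrained(quant_path)

The commit itself changes no behavior; it only documents that both the load and save steps accept a safetensors=True argument when .safetensors weights are preferred over pickle-based .bin files.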