OpenDAS / bitsandbytes

Commit 6a934d4f — reorder state_dict
Authored Sep 13, 2023 by Ruslan Svirschevski
Parent: 1d541b50

1 changed file, 2 additions, 1 deletion:

bitsandbytes/nn/modules.py (+2 −1)
@@ -229,6 +229,8 @@ class Linear4bit(nn.Linear):
         besides weight and bias,
         fill state_dict with components of quant_state
         """
+        super()._save_to_state_dict(destination, prefix, keep_vars)  # saving weight and bias
+
         if getattr(self.weight, "quant_state", None) is not None:
             quant_state_dict = self.weight.quant_state.as_dict()
             tensor_keys = [k for k, v in quant_state_dict.items() if isinstance(v, torch.Tensor)]
@@ -236,7 +238,6 @@ class Linear4bit(nn.Linear):
                 destination[prefix + "weight." + k] = quant_state_dict.pop(k) if keep_vars else quant_state_dict.pop(k).detach()
             destination[prefix + "weight." + "quant_state_dict"] = quant_state_dict
             destination[prefix + "weight." + "quantization_method"] = "bitsandbytes." + quant_state_dict["quant_type"]
-        super()._save_to_state_dict(destination, prefix, keep_vars)  # saving weight and bias

     def forward(self, x: torch.Tensor):
         # weights are cast automatically as Int8Params, but the bias has to be cast manually
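The point of the reorder: Python dicts (and therefore PyTorch state dicts) preserve insertion order, so calling `super()._save_to_state_dict()` first makes the plain `weight`/`bias` entries appear before the `weight.*` quant-state entries in the saved state dict. A minimal sketch of that effect, using stand-in classes (`Base`, `QuantLinear`, and the string "tensors" are hypothetical stand-ins, not the real bitsandbytes or PyTorch API):

```python
# Hypothetical stand-in for nn.Linear's _save_to_state_dict, which writes
# the module's weight and bias into the destination dict.
class Base:
    def _save_to_state_dict(self, destination, prefix, keep_vars):
        destination[prefix + "weight"] = "weight-tensor"
        destination[prefix + "bias"] = "bias-tensor"


# Hypothetical stand-in for Linear4bit: after this commit, the parent's
# save runs first, so weight/bias precede the quant-state components.
class QuantLinear(Base):
    def __init__(self):
        self.quant_state = {"absmax": "absmax-tensor", "quant_type": "nf4"}

    def _save_to_state_dict(self, destination, prefix, keep_vars):
        # weight and bias are written first (the line the commit moved up) ...
        super()._save_to_state_dict(destination, prefix, keep_vars)
        # ... then the quant_state components follow them in insertion order
        for k, v in self.quant_state.items():
            destination[prefix + "weight." + k] = v


dest = {}
QuantLinear()._save_to_state_dict(dest, "layer.", keep_vars=False)
print(list(dest))
# ['layer.weight', 'layer.bias', 'layer.weight.absmax', 'layer.weight.quant_type']
```

With the pre-commit ordering (super call last), the same keys would come out with `layer.weight` and `layer.bias` at the end instead, which is what the commit message's "reorder state_dict" refers to.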