OpenDAS / bitsandbytes · Commits

Commit 87f88af4
Authored Jul 29, 2024 by Matthew Douglas

Enable loading prequantized weights with bf16/fp16/fp32 quant_storage type for FSDP

Parent: 2621e1af
Showing 1 changed file with 5 additions and 0 deletions:

bitsandbytes/nn/modules.py (+5, -0)
@@ -273,6 +273,7 @@ class Params4bit(torch.nn.Parameter):
         quantized_stats: Dict[str, Any],
         requires_grad: bool = False,
         device="cuda",
+        module: Optional["Linear4bit"] = None,
         **kwargs,
     ) -> "Params4bit":
         self = torch.Tensor._make_subclass(cls, data.to(device))
@@ -284,6 +285,10 @@ class Params4bit(torch.nn.Parameter):
         self.bnb_quantized = True

         self.quant_storage = data.dtype
+        self.module = module
+
+        if self.module is not None:
+            self.module.quant_state = self.quant_state

         return self
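The sketch below is not part of the commit; it only illustrates the flow this change targets: a Linear4bit whose 4-bit weights are packed into bfloat16 storage is quantized and saved, then rebuilt from the already-quantized tensors via Params4bit.from_prequantized with the new module= argument. The layer size, the "weight."-prefixed key handling, and the save/reload round trip are illustrative assumptions, not code from the repository.

# Minimal sketch, assuming a CUDA build of bitsandbytes and the usual
# Linear4bit state_dict layout ("weight" plus "weight.*" quant-state tensors).
import torch
import bitsandbytes as bnb

# Quantize once; the 4-bit weights are packed into bf16 storage (FSDP-friendly).
layer = bnb.nn.Linear4bit(
    64, 64, bias=False,
    compute_dtype=torch.bfloat16,
    quant_type="nf4",
    quant_storage=torch.bfloat16,
)
layer = layer.cuda()             # quantization happens on the move to CUDA
sd = layer.state_dict()          # "weight" (packed, dtype bf16) + "weight.*" quant-state tensors

# Rebuild a fresh layer from the prequantized state dict.
target = bnb.nn.Linear4bit(
    64, 64, bias=False,
    compute_dtype=torch.bfloat16,
    quant_type="nf4",
    quant_storage=torch.bfloat16,
)
packed = sd["weight"]            # no longer required to be uint8
stats = {k[len("weight."):]: v for k, v in sd.items() if k.startswith("weight.")}

target.weight = bnb.nn.Params4bit.from_prequantized(
    data=packed,
    quantized_stats=stats,
    requires_grad=False,
    device="cuda",
    module=target,               # new in this commit: quant_state is pushed back onto the layer
)
# After this commit, target.weight.quant_storage reflects data.dtype
# (torch.bfloat16 here) and target.quant_state is populated even though
# quantization was skipped during loading.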