OpenDAS / bitsandbytes — commit 2ee289fb (unverified)

Merge pull request #867 from jph00/patch-2: Avoid double-quantizing when calling `cuda()`

Authored Dec 03, 2023 by Titus; committed via GitHub on Dec 03, 2023.
Parents: 744d36f7, a403c0ed
Changes: 1 changed file with 2 additions and 0 deletions (+2 −0): bitsandbytes/nn/modules.py
bitsandbytes/nn/modules.py:

@@ -165,6 +165,8 @@ class Params4bit(torch.nn.Parameter):
         return self

     def cuda(self, device):
+        if self.quant_state is not None:
+            return self
         w = self.data.contiguous().half().cuda(device)
         w_4bit, quant_state = bnb.functional.quantize_4bit(w, blocksize=self.blocksize, compress_statistics=self.compress_statistics, quant_type=self.quant_type)
         self.data = w_4bit
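The two added lines make `Params4bit.cuda()` idempotent: once `quant_state` has been set by a first quantization pass, a second call returns immediately instead of running `quantize_4bit` again on data that is already 4-bit packed. The guard pattern can be sketched with a hypothetical stand-in (the `FakeParams4bit` class and its list-based "quantization" are illustrative only, not the real bitsandbytes API, which operates on CUDA tensors):

```python
# Hypothetical stand-in illustrating the idempotency guard from this commit.
# Real bitsandbytes quantizes a tensor with bnb.functional.quantize_4bit;
# here "quantization" is simulated by rounding a list of floats.

class FakeParams4bit:
    def __init__(self, data):
        self.data = data          # stands in for the float parameter tensor
        self.quant_state = None   # set once the data has been quantized

    def cuda(self, device=None):
        # The guard added in the patch: if quantization already happened,
        # return self unchanged rather than quantizing self.data again.
        if self.quant_state is not None:
            return self
        # Stand-in for bnb.functional.quantize_4bit(...)
        self.data = [round(x) for x in self.data]
        self.quant_state = {"blocksize": 64}
        return self

p = FakeParams4bit([0.4, 1.6])
p.cuda()                  # first call quantizes: data becomes [0, 2]
first = list(p.data)
p.cuda()                  # second call is a no-op thanks to the guard
assert p.data == first
```

Without the guard, moving an already-quantized parameter to the GPU again would quantize the packed 4-bit data a second time, corrupting the weights; the early return preserves the original quantized state.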