OpenDAS / bitsandbytes · Commits

Commit 6c31a5fe, authored Feb 27, 2023 by Artidoro Pagnoni

t5 model fix

parent 9851a10b

Showing 1 changed file with 2 additions and 2 deletions:

bitsandbytes/nn/modules.py (+2, −2)
```diff
@@ -190,10 +190,10 @@ class LinearFP4(nn.Linear):
         if getattr(self.weight, 'quant_state', None) is None:
             print('FP4 quantization state not initialized. Please call .cuda() or .to(device) on the LinearFP4 layer first.')
         inp_dtype = x.dtype
         x = x.to(torch.float16)
-        out = bnb.matmul_fp4(x, self.weight.t(), bias=self.bias.half(), quant_state=self.weight.quant_state)
+        bias = None if self.bias is None else self.bias.half()
+        out = bnb.matmul_fp4(x, self.weight.t(), bias=bias, quant_state=self.weight.quant_state)
         out = out.to(inp_dtype)
         return out
```
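The change guards against layers constructed without a bias (e.g. `bias=False`, as in T5's linear projections): there `self.bias` is `None`, and the old unconditional `self.bias.half()` raised an `AttributeError`. A minimal sketch of the pattern, using a hypothetical stand-in tensor class instead of torch so it runs without CUDA:

```python
class FakeTensor:
    """Stand-in for a torch tensor; only models the .half() cast."""
    def __init__(self, dtype="float32"):
        self.dtype = dtype

    def half(self):
        # Mimics tensor.half(): returns an fp16 copy.
        return FakeTensor(dtype="float16")


def bias_arg(bias):
    """The pattern this commit introduces: cast only when a bias exists."""
    return None if bias is None else bias.half()


# A layer built with bias=False stores self.bias = None.
print(bias_arg(None))                 # -> None, instead of AttributeError
print(bias_arg(FakeTensor()).dtype)   # -> float16
```

With the old code, `bias_arg` would have been just `bias.half()`, which fails on `None`; the conditional keeps the fp16 cast for real biases while passing `None` through to `matmul_fp4` untouched.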