OpenDAS / bitsandbytes · Commit 647c976a
Authored Sep 17, 2022 by justheuristic
Parent: 0de1a449

    change order

Showing 1 changed file with 2 additions and 5 deletions (+2 −5):
bitsandbytes/autograd/_functions.py
```diff
@@ -316,10 +316,10 @@ class MatMul8bitLt(torch.autograd.Function):
 
         if bias is None or bias.dtype == torch.float16:
             output = F.mm_dequant(out32, Sout32, SCA, state.SCB, bias=bias)
-            delayed_bias = None
+            output = output.to(A_dtype)
         else:  # apply bias separately
             output = F.mm_dequant(out32, Sout32, SCA, state.SCB, bias=None)
-            delayed_bias = bias
+            output = output.to(A_dtype).add_(bias)
 
         # 4. Mixed-precision decomposition matmul
         if coo_tensorA is not None and subA is not None:
```

```diff
@@ -340,9 +340,6 @@ class MatMul8bitLt(torch.autograd.Function):
 
             ctx.tensor_states = (None, None)
             ctx.save_for_backward(None, None)
 
-        output = output.to(A_dtype)
-        if delayed_bias is not None:
-            output.add_(delayed_bias)
         clone_func = torch.clone if len(output_shape) == 3 else lambda x: x
         return clone_func(output.view(output_shape))
```
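The commit moves the dtype cast and the non-fp16 bias addition into the branch where the bias dtype is known, instead of carrying a `delayed_bias` variable to the end of `forward`. A minimal sketch of the resulting branch structure is below; `dequant_with_bias` and its parameters are hypothetical names, and `out_fp16` stands in for the float16 result of `F.mm_dequant` (the real kernel also takes `out32`, `Sout32`, `SCA`, and `state.SCB`) — this is not the bitsandbytes API.

```python
import torch

def dequant_with_bias(out_fp16: torch.Tensor, bias, a_dtype):
    """Sketch of the bias handling in MatMul8bitLt.forward after this commit.

    A float16 bias can be fused into the dequantization itself; any other
    bias dtype is applied separately, after casting the output to A's dtype.
    """
    if bias is None or bias.dtype == torch.float16:
        # fp16 bias: fused into the dequant step (plain add stands in here),
        # then the result is cast to the activation dtype
        output = out_fp16 if bias is None else out_fp16 + bias
        output = output.to(a_dtype)
    else:
        # non-fp16 bias (e.g. bfloat16): cast first, then add in place,
        # so the addition happens in the target dtype rather than fp16
        output = out_fp16.to(a_dtype).add_(bias)
    return output
```

Doing the cast and add inside each branch removes the need for the `delayed_bias` bookkeeping that the second hunk deletes, since each branch already knows whether the bias was fused.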