OpenDAS / bitsandbytes

Commit fc4a135e
authored Sep 17, 2022 by justheuristic

clearer assertions

parent e29c5f5c
Showing 1 changed file with 2 additions and 2 deletions:

bitsandbytes/autograd/_functions.py  (+2 −2)
bitsandbytes/autograd/_functions.py

@@ -232,7 +232,7 @@ class MatMul8bitLt(torch.autograd.Function):
         # Cast A to fp16
         A_dtype = A.dtype
         if A_dtype != torch.float16:
-            warnings.warn(f"MatMul8bitLt: temporarily casting input matrix from {A_dtype} to float16")
+            warnings.warn(f"MatMul8bitLt: input matrix will be converted from {A_dtype} to float16")
         A = A.to(torch.float16)

         # 1. Quantize A
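
For context, the changed line sits in the warn-then-cast guard at the top of MatMul8bitLt's forward pass. The sketch below reproduces that pattern in isolation, assuming nothing beyond PyTorch; cast_input_to_fp16 is a hypothetical name used here for illustration, not a bitsandbytes API.

import warnings

import torch


def cast_input_to_fp16(A: torch.Tensor) -> torch.Tensor:
    # Hypothetical standalone mirror of the guard in MatMul8bitLt;
    # not part of bitsandbytes itself.
    A_dtype = A.dtype
    if A_dtype != torch.float16:
        # Warning wording as of this commit.
        warnings.warn(f"MatMul8bitLt: input matrix will be converted from {A_dtype} to float16")
    return A.to(torch.float16)  # returns A unchanged when it is already fp16


A = torch.randn(4, 8)               # float32 by default
A_fp16 = cast_input_to_fp16(A)      # emits the warning, returns an fp16 copy
assert A_fp16.dtype == torch.float16

Note that the cast itself stays outside the dtype check: Tensor.to returns the input unchanged when the dtype already matches, so only non-fp16 inputs pay for a conversion.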