OpenDAS / bitsandbytes · Commits

Commit e35e2c66, authored Sep 18, 2022 by justheuristic

    cast properly

Parent: 577275bd
Showing 2 changed files, with 4 additions and 2 deletions:

- bitsandbytes/autograd/_functions.py (+1, -1)
- tests/test_autograd.py (+3, -1)
bitsandbytes/autograd/_functions.py

```diff
@@ -231,7 +231,7 @@ class MatMul8bitLt(torch.autograd.Function):
         # Cast A to fp16
         if A.dtype != torch.float16:
-            warnings.warn(f"MatMul8bitLt: input matrix will be cast from {A.dtype} to float16")
+            warnings.warn(f"MatMul8bitLt: inputs will be cast from {A.dtype} to float16 during quantization")

         # 1. Quantize A
         if len(A.shape) == 3:
```
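The changed line above rewords a warning that guards a cast to fp16 before quantization. As a standalone sketch of that warn-then-cast pattern — a hypothetical helper, not actual bitsandbytes code, with numpy standing in for torch:

```python
import warnings
import numpy as np

def cast_input_for_quantization(A: np.ndarray) -> np.ndarray:
    """Hypothetical helper mirroring the MatMul8bitLt cast above.

    numpy stands in for torch here; the real code checks torch.float16.
    """
    if A.dtype != np.float16:
        # Reworded message from the commit: "inputs ... during quantization"
        # instead of "input matrix ...".
        warnings.warn(
            f"MatMul8bitLt: inputs will be cast from {A.dtype} to float16 "
            "during quantization"
        )
        A = A.astype(np.float16)
    return A

A = np.ones((2, 3), dtype=np.float32)
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    A16 = cast_input_for_quantization(A)
print(A16.dtype, len(caught))  # float16 1
```

The cast is silent only when the input is already fp16; any other dtype triggers exactly one warning per call.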
tests/test_autograd.py

```diff
@@ -372,8 +372,10 @@ def test_matmullt(
         n = out_bnb.numel()
         err = torch.abs(out_bnb - out_torch).mean().item()
         # print(f'abs error {err:.4f}')
+        out_error_rate = 0.0175 if dtype == torch.float16 else 0.02
         idx = torch.isclose(out_bnb, out_torch, atol=0.01, rtol=0.1)
-        assert (idx == 0).sum().item() <= n * 0.0175
+        assert (idx == 0).sum().item() <= n * out_error_rate
         idx = torch.isclose(out_bnb, out_torch, atol=0.035, rtol=0.2)
         assert (idx == 0).sum().item() <= n * 0.001
```
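The test change above makes the tolerated mismatch rate depend on dtype: up to 1.75% of elements may fall outside the `isclose` band for fp16, 2% otherwise. The check itself — count elements where `isclose` is False and bound that count by `n * rate` — can be sketched self-contained; numpy's `isclose` replaces `torch.isclose`, and `out_bnb` / `out_torch` are synthetic stand-ins, not real kernel outputs:

```python
import numpy as np

# Synthetic reference output and a slightly perturbed "quantized" output.
rng = np.random.default_rng(0)
out_torch = rng.standard_normal(10_000).astype(np.float32)
out_bnb = out_torch + rng.standard_normal(10_000).astype(np.float32) * 1e-3

n = out_bnb.size
# As in the commit: the mismatch-rate budget now depends on dtype.
dtype = np.float16
out_error_rate = 0.0175 if dtype == np.float16 else 0.02

# Elements within atol + rtol * |reference| count as matches.
idx = np.isclose(out_bnb, out_torch, atol=0.01, rtol=0.1)
mismatches = (idx == 0).sum()
assert mismatches <= n * out_error_rate
print(f"{int(mismatches)} / {n} mismatches")
```

Bounding the *fraction* of mismatching elements, rather than the mean error alone, tolerates the occasional outlier a quantized matmul produces while still failing if errors become widespread.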
...
Write
Preview
Markdown
is supported
0%
Try again
or
attach a new file
.
Attach a file
Cancel
You are about to add
0
people
to the discussion. Proceed with caution.
Finish editing this message first!
Cancel
Please
register
or
sign in
to comment