OpenDAS / bitsandbytes · Commit 95dafc64

cast before allclose

Authored Sep 18, 2022 by justheuristic
Parent: 37f805bb

Showing 1 changed file with 3 additions and 4 deletions.

tests/test_modules.py (+3, -4)
@@ -541,8 +541,8 @@ def test_linear8bitlt_no_fp16_weights(threshold, memory_efficient_backward):
     mlp = MLP8bit(
         32, 64, threshold=threshold, has_fp16_weights=False, memory_efficient_backward=memory_efficient_backward
     )
-    w1, w2 = mlp.fc1.weight.clone(), mlp.fc2.weight.clone()
-    mlp = mlp.cuda().half()
+    w1, w2 = mlp.fc1.weight.clone(), mlp.fc2.weight.clone()  # note: we grad original weights before quantization,
+    mlp = mlp.cuda().half()  # and this line triggers quantization
 
     for i in range(100):
         b1 = torch.randn(16, 8, 32, device="cuda").half()
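The two comments added here document an ordering constraint rather than changing behavior: with has_fp16_weights=False, moving a Linear8bitLt layer to the GPU quantizes its weight in place, so the fp32 reference copies must be cloned before the .cuda().half() call. A minimal sketch of that behavior, assuming bitsandbytes is installed and a CUDA device is available (layer sizes mirror the test's MLP8bit(32, 64); the threshold value here is arbitrary):

import torch
import bitsandbytes as bnb

layer = bnb.nn.Linear8bitLt(32, 64, threshold=6.0, has_fp16_weights=False)
w_ref = layer.weight.clone()       # fp32 copy taken while still on the CPU

layer = layer.cuda().half()        # this cast triggers int8 quantization

print(w_ref.dtype, w_ref.device)   # torch.float32 cpu -- reference intact
print(layer.weight.dtype)          # torch.int8 -- the fp32 values are gone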
@@ -567,8 +567,7 @@ def test_linear8bitlt_no_fp16_weights(threshold, memory_efficient_backward):
         mlp.zero_grad()
         (o1 * grad_proj).sum().backward()
-        assert False, (w1, w2)
-        grad_ref = grad_proj.flatten(2) @ w2 @ w1
+        grad_ref = grad_proj.flatten(2) @ w2.to(grad_proj.device) @ w1.to(grad_proj.device)
         assert torch.allclose(b1.grad, grad_ref)
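This hunk is the fix named in the commit message: w1 and w2 were cloned while the model was still on the CPU, but grad_proj lives on the GPU, so the old grad_ref line mixed devices in the matmul before torch.allclose ever ran. The hunk also drops a leftover debugging assertion (assert False, (w1, w2)). A minimal plain-PyTorch sketch of the failure and the cast that fixes it, assuming a CUDA device (random fp32 stand-in weights; the test uses the actual clones in fp16):

import torch

# Shapes match MLP8bit(32, 64): fc1 is 32 -> 64, fc2 is 64 -> 32.
w1 = torch.randn(64, 32)                        # like mlp.fc1.weight.clone(), on CPU
w2 = torch.randn(32, 64)                        # like mlp.fc2.weight.clone(), on CPU
grad_proj = torch.randn(16, 8, 32, device="cuda")

try:
    grad_ref = grad_proj.flatten(2) @ w2 @ w1   # CUDA tensor @ CPU tensor
except RuntimeError as err:
    print(err)   # "Expected all tensors to be on the same device ..."

# The committed fix: move the clones to grad_proj's device before the matmul,
# so the subsequent torch.allclose compares same-device tensors.
grad_ref = grad_proj.flatten(2) @ w2.to(grad_proj.device) @ w1.to(grad_proj.device)
print(grad_ref.shape)                           # torch.Size([16, 8, 32])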