OpenDAS / bitsandbytes / Commits / 2cd047e3

Commit 2cd047e3, authored Sep 18, 2022 by justheuristic

run backward

Parent: 591f6039
Showing 1 changed file with 11 additions and 0 deletions.

tests/test_modules.py  +11 -0
tests/test_modules.py  (view file @ 2cd047e3)

@@ -554,11 +554,22 @@ def test_linear8bitlt_no_fp16_weights(threshold, memory_efficient_backward):
     assert mlp.fc1.state.idx is not None
     if threshold > 0:
         assert mlp.fc2.state.idx is not None
     assert mlp.fc1.weight.dtype == torch.int8
     assert mlp.fc2.weight.dtype == torch.int8
     assert mlp.fc1.weight.device.type == "cuda"
     assert mlp.fc2.weight.device.type == "cuda"
+
+    if memory_efficient_backward:
+        b1 = torch.randn(16, 8, 32, device="cuda", requires_grad=True, dtype=torch.half)
+        o1 = mlp(b1)
+        assert o1.dtype == torch.float16
+        assert o1.requires_grad
+        grad_proj = torch.randn_like(o1)
+        (o1 * grad_proj).sum().backward()
+

 def test_linear8bitlt_fp32_bias():
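The added lines extend the existing forward-only test so that, when the memory_efficient_backward parametrization is active, a gradient is actually propagated through the int8 layers (matching the commit message, "run backward"). Below is a minimal standalone sketch of the same check, assuming a CUDA device and assuming bitsandbytes' Linear8bitLt accepts a memory_efficient_backward flag, as the test's parametrization suggests; the MLP8bit test helper is replaced here by a single hypothetical layer, so treat this as an illustration rather than the repository's own test code.

import torch
import bitsandbytes as bnb

# Assumption: single Linear8bitLt layer standing in for the test's MLP8bit helper;
# the memory_efficient_backward flag name is taken from the test parametrization.
layer = bnb.nn.Linear8bitLt(
    32, 64,
    has_fp16_weights=False,
    threshold=6.0,
    memory_efficient_backward=True,
).half().to("cuda")

# Half-precision input with requires_grad=True, mirroring the added test lines.
b1 = torch.randn(16, 8, 32, device="cuda", requires_grad=True, dtype=torch.half)
o1 = layer(b1)
assert o1.dtype == torch.float16
assert o1.requires_grad

# Project the output onto a random direction and backpropagate; the input's
# gradient should be populated even though the weights are stored as int8.
grad_proj = torch.randn_like(o1)
(o1 * grad_proj).sum().backward()
assert b1.grad is not None

The sum-of-random-projection trick gives a scalar loss whose gradient with respect to o1 is exactly grad_proj, so backward() exercises the layer's backward pass with a known, non-degenerate upstream gradient.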