chenpangpang / transformers · Commits

Commit f0395cf5 (unverified), authored May 17, 2022 by Kyungmin Lee; committed by GitHub May 16, 2022.

Fix test_model_parallelization (#17249)

* Fix test_model_parallelization
* Modify

parent e705e126
Showing 1 changed file with 1 addition and 1 deletion.

tests/test_modeling_common.py (+1 −1)
@@ -2065,7 +2065,7 @@ class ModelTesterMixin:
         memory_after_parallelization = get_current_gpu_memory_use()

         # Assert that the memory use on all devices is higher than it was when loaded only on CPU
-        for n in range(torch.cuda.device_count()):
+        for n in range(len(model.device_map.keys())):
             self.assertGreater(memory_after_parallelization[n], memory_at_start[n])

         # Assert that the memory use of device 0 is lower than it was when the entire model was loaded on it
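The change above narrows the loop from every visible CUDA device to only the devices the model's `device_map` actually covers: if a model is parallelized across a subset of the available GPUs, the untouched GPUs show no memory increase and the old assertion fails. The sketch below illustrates that rationale with stand-in data; `device_map`, the memory lists, and the device counts here are hypothetical and are not the real Hugging Face or CUDA objects.

```python
# Minimal sketch, assuming a machine with 4 visible GPUs and a model
# parallelized over only 2 of them (hypothetical values throughout).

def devices_to_check(device_map):
    """Devices the model actually occupies, mirroring the fixed test loop."""
    return range(len(device_map.keys()))

visible_gpu_count = 4
device_map = {0: [0, 1, 2], 1: [3, 4, 5]}  # layer blocks assigned per device

memory_at_start = [100, 100, 0, 0]               # MiB per device before load
memory_after_parallelization = [900, 900, 0, 0]  # devices 2 and 3 untouched

# Old loop would also visit devices 2 and 3, asserting 0 > 0 and failing:
old_iteration = list(range(visible_gpu_count))      # [0, 1, 2, 3]
new_iteration = list(devices_to_check(device_map))  # [0, 1]

# The fixed assertion only checks devices that hold part of the model:
for n in devices_to_check(device_map):
    assert memory_after_parallelization[n] > memory_at_start[n]
```

With the stand-in data, the old iteration range includes devices 2 and 3, where memory did not grow, while the new range stops at the devices listed in the `device_map`, so the memory-growth assertion holds.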