Unverified commit f0395cf5 authored by Kyungmin Lee, committed by GitHub

Fix test_model_parallelization (#17249)

* Fix test_model_parallelization

* Modify
parent e705e126
@@ -2065,7 +2065,7 @@ class ModelTesterMixin:
             memory_after_parallelization = get_current_gpu_memory_use()

             # Assert that the memory use on all devices is higher than it was when loaded only on CPU
-            for n in range(torch.cuda.device_count()):
+            for n in range(len(model.device_map.keys())):
                 self.assertGreater(memory_after_parallelization[n], memory_at_start[n])

             # Assert that the memory use of device 0 is lower than it was when the entire model was loaded on it
...
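The change narrows the assertion loop from every visible CUDA device to only as many devices as the model's `device_map` actually spans, so the test no longer demands a memory increase on GPUs that never received any layers. A minimal sketch of the idea, using a hypothetical `gpu_memory_snapshot()` helper based on `torch.cuda.memory_allocated` (the test file's own `get_current_gpu_memory_use()` may be implemented differently) and an illustrative `device_map`:

```python
import torch

def gpu_memory_snapshot():
    # Hypothetical helper: bytes currently allocated on each visible CUDA device,
    # standing in for the test suite's get_current_gpu_memory_use().
    return [torch.cuda.memory_allocated(i) for i in range(torch.cuda.device_count())]

# Suppose the model was parallelized over 2 GPUs on a 4-GPU machine
# (device_map maps a device index to the layer indices placed on it).
device_map = {0: [0, 1, 2, 3, 4, 5], 1: [6, 7, 8, 9, 10, 11]}

# Old loop: iterates over every visible GPU, so devices 2 and 3, which hold no
# layers, would spuriously fail the "memory went up after parallelization" check.
old_devices = list(range(torch.cuda.device_count()))

# Fixed loop: only checks as many devices as the device_map actually uses.
new_devices = list(range(len(device_map.keys())))

print(new_devices)  # [0, 1]
```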