OpenDAS / ColossalAI · Commits

Commit 63469c0f
Authored Mar 14, 2022 by ver217

    polish code

Parent: 54fd37f0
Showing 1 changed file with 3 additions and 0 deletions:

colossalai/zero/shard_utils/bucket_tensor_shard_strategy.py  (+3, -0)
@@ -23,6 +23,9 @@ class BucketTensorShardStrategy(TensorShardStrategy):
         for i in range(self.world_size):
             if i == self.local_rank:
                 buffer_list.append(flatten([t.payload for t in tensor_list]).cuda(get_current_device()))
+                # Release payload here, to decrease peak memory usage
+                for t in tensor_list:
+                    t.reset_payload(None)
             else:
                 buffer_list.append(torch.zeros(buffer_size, dtype=dtype, device=get_current_device()))
         dist.all_gather(buffer_list, buffer_list[self.local_rank], group=self.process_group)
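The three added lines drop each shard's `payload` reference right after it has been copied into the flattened bucket, so the original per-tensor storage can be freed before the zero-filled receive buffers are allocated, lowering peak memory during the all-gather. Below is a minimal single-process sketch of that pattern using plain Python lists in place of CUDA tensors; `FakeShard`, `flatten`, and `bucket_all_gather` are hypothetical stand-ins for the real `ShardedTensor`, `flatten`, and `dist.all_gather` machinery, not ColossalAI APIs.

```python
class FakeShard:
    """Hypothetical stand-in for a sharded tensor holding this rank's slice."""
    def __init__(self, payload):
        self.payload = payload

    def reset_payload(self, payload):
        # Dropping the reference lets the old slice be freed; this is the
        # peak-memory saving the commit's three added lines provide.
        self.payload = payload


def flatten(chunks):
    """Concatenate per-tensor payloads into one contiguous bucket."""
    out = []
    for c in chunks:
        out.extend(c)
    return out


def bucket_all_gather(per_rank_tensor_lists):
    """Simulate a bucketed all-gather: every rank ends up holding every
    rank's flattened bucket. Purely illustrative, single process."""
    world_size = len(per_rank_tensor_lists)
    buffer_size = len(flatten([t.payload for t in per_rank_tensor_lists[0]]))
    gathered = []
    for rank in range(world_size):
        buffer_list = []
        for i in range(world_size):
            if i == rank:
                # Copy this rank's shards into one flat bucket...
                buffer_list.append(
                    flatten([t.payload for t in per_rank_tensor_lists[rank]]))
                # ...then release the originals (mirrors the "+3" lines).
                for t in per_rank_tensor_lists[rank]:
                    t.reset_payload(None)
            else:
                # Placeholder receive buffer, filled by the gather step below.
                buffer_list.append([0] * buffer_size)
        gathered.append(buffer_list)
    # All-gather step: rank i's own bucket fills slot i on every rank.
    for rank in range(world_size):
        for i in range(world_size):
            gathered[rank][i] = gathered[i][i]
    return gathered
```

In the real strategy the release is safe because the payload has already been copied into `buffer_list` before `reset_payload(None)` runs, and `dist.all_gather` only reads from the flattened buckets.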