OpenDAS / ColossalAI · Commit 1cb7bdad (unverified)

Authored Apr 12, 2022 by Frank Lee; committed by GitHub on Apr 12, 2022

[util] fixed communication API depth with PyTorch 1.9 (#721)

parent 2412429d
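The title refers to how deep in the torch.distributed namespace a private c10d helper is looked up: the patch switches the _rank_not_in_group call from the top-level torch.distributed module to torch.distributed.distributed_c10d, where the helper is defined, because the shallower path is apparently not exposed on PyTorch 1.9. A minimal compatibility sketch of the same idea (not part of the commit; resolve_c10d_helper is a hypothetical name):

import torch.distributed as dist
from torch.distributed import distributed_c10d

def resolve_c10d_helper(name):
    # Prefer the top-level namespace when the helper is re-exported there,
    # otherwise fall back to distributed_c10d (the case on PyTorch 1.9).
    return getattr(dist, name, None) or getattr(distributed_c10d, name)

_rank_not_in_group = resolve_c10d_helper("_rank_not_in_group")
_object_to_tensor = resolve_c10d_helper("_object_to_tensor")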
Showing 1 changed file with 2 additions and 2 deletions.

colossalai/communication/collective.py (+2, -2)
...

@@ -211,7 +211,7 @@ def reduce(tensor: Tensor,
 def scatter_object_list(scatter_object_output_list, scatter_object_input_list, src=0, group=None):
     r"""Modified from `torch.distributed.scatter_object_list <https://pytorch.org/docs/stable/_modules/torch/distributed/distributed_c10d.html#scatter_object_list>` to fix issues
     """
-    if dist._rank_not_in_group(group):
+    if dist.distributed_c10d._rank_not_in_group(group):
         return

     if (not isinstance(scatter_object_output_list, list) or len(scatter_object_output_list) < 1):

...

@@ -220,7 +220,7 @@ def scatter_object_list(scatter_object_output_list, scatter_object_input_list, s
     # set tensor device to cuda if backend is nccl
     device = torch.cuda.current_device() if dist.get_backend(group) == 'nccl' else torch.device("cpu")

-    my_rank = dist.get_rank()  # use global rank
+    my_rank = dist.get_rank()  # use global rank
     if my_rank == src:
         tensor_list, tensor_sizes = zip(
             *[dist.distributed_c10d._object_to_tensor(obj) for obj in scatter_object_input_list])

...
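For context, a hypothetical usage sketch of the patched function, assuming a torch.distributed process group has already been initialized (for example via dist.init_process_group or colossalai.launch) and that scatter_object_list is imported directly from the module changed here; none of this is shown in the diff itself:

import torch.distributed as dist
from colossalai.communication.collective import scatter_object_list

def demo():
    rank = dist.get_rank()
    world_size = dist.get_world_size()
    # Only the source rank needs to supply the input objects.
    inputs = [{"payload": r} for r in range(world_size)] if rank == 0 else None
    outputs = [None]    # must be a non-empty list on every rank
    scatter_object_list(outputs, inputs, src=0)
    print(f"rank {rank} received {outputs[0]}")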