OpenDAS / ColossalAI / Commits

Commit 197a2c89, authored Jul 12, 2022 by Zangwei Zheng, committed by Frank Lee on Jul 13, 2022.
[NFC] polish colossalai/communication/collective.py (#1262)
parent f1cafcc7

Showing 1 changed file with 2 additions and 9 deletions:
colossalai/communication/collective.py (+2, -9)
colossalai/communication/collective.py @ 197a2c89
@@ -10,10 +10,7 @@ from colossalai.context import ParallelMode
 from colossalai.core import global_context as gpc
 
 
-def all_gather(tensor: Tensor,
-               dim: int,
-               parallel_mode: ParallelMode,
-               async_op: bool = False) -> Tensor:
+def all_gather(tensor: Tensor, dim: int, parallel_mode: ParallelMode, async_op: bool = False) -> Tensor:
     r"""Gathers all tensors from the parallel group and concatenates them in a
     specific dimension.
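For intuition about what `all_gather` does, the gather-and-concatenate semantics can be sketched in plain Python. This is a toy single-process simulation over hypothetical ranks, not ColossalAI's NCCL-backed implementation; `simulated_all_gather` and the 2-D list "tensors" are illustrative inventions.

```python
def simulated_all_gather(per_rank_tensors, dim=0):
    """Simulate all_gather: every rank contributes its local 2-D 'tensor'
    (a list of lists), and the result is their concatenation along `dim`.
    In a real collective, every rank would receive this same result."""
    if dim == 0:
        # Concatenate along rows: stack each rank's rows in rank order.
        return [row for t in per_rank_tensors for row in t]
    # dim == 1: concatenate each row across ranks.
    n_rows = len(per_rank_tensors[0])
    return [sum((t[r] for t in per_rank_tensors), []) for r in range(n_rows)]


# Two hypothetical ranks each hold a 1x2 shard of a larger tensor.
rank0 = [[0, 1]]
rank1 = [[2, 3]]
print(simulated_all_gather([rank0, rank1], dim=0))  # [[0, 1], [2, 3]]
print(simulated_all_gather([rank0, rank1], dim=1))  # [[0, 1, 2, 3]]
```

The `dim` argument in the real signature plays the same role: it selects the axis along which the per-rank shards are concatenated.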
@@ -163,11 +160,7 @@ def broadcast(tensor: Tensor, src: int, parallel_mode: ParallelMode, async_op: b
     return out
 
 
-def reduce(tensor: Tensor,
-           dst: int,
-           parallel_mode: ParallelMode,
-           op: ReduceOp = ReduceOp.SUM,
-           async_op: bool = False):
+def reduce(tensor: Tensor, dst: int, parallel_mode: ParallelMode, op: ReduceOp = ReduceOp.SUM, async_op: bool = False):
     r"""Reduce tensors across whole parallel group. Only the process with
     rank ``dst`` is going to receive the final result.
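The docstring's key point for `reduce` is that only rank ``dst`` receives the reduced result. That semantics can be sketched with another plain-Python simulation (again a hypothetical stand-in, not the library's implementation; here `op` defaults to sum, mirroring `ReduceOp.SUM`):

```python
def simulated_reduce(local_values, dst, op=sum):
    """Simulate reduce: each rank holds one value; the reduction `op`
    over all ranks is delivered only to rank `dst`, while every other
    rank keeps its original local buffer unchanged."""
    result = op(local_values)
    return [result if rank == dst else v
            for rank, v in enumerate(local_values)]


# Four hypothetical ranks hold 1, 2, 3, 4; rank 0 is the destination.
buffers = simulated_reduce([1, 2, 3, 4], dst=0)
print(buffers)  # [10, 2, 3, 4] -- only rank 0 holds the SUM
```

This contrasts with `all_reduce`, where every rank would end up with the reduced value.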