OpenDAS / Megatron-LM

Commit f3e57f6f
Authored Sep 06, 2021 by Sangkug Lym
Parent: 6e1bde1e

remove increasing nccl stream for overlapping allreduce and gemm

Showing 1 changed file with 5 additions and 16 deletions:
  megatron/initialize.py  (+5, -16)
megatron/initialize.py
@@ -176,22 +176,11 @@ def _initialize_distributed():
             else:
                 args.local_rank = device
             torch.cuda.set_device(device)
-        # Increase cuda stream priority of NCCL ops when overlapping with other ops
-        if (not args.no_async_tensor_model_parallel_allreduce and
-                args.tensor_model_parallel_size > 1):
-            from torch._C._distributed_c10d import ProcessGroupNCCL
-            pg_options = ProcessGroupNCCL.Options()
-            pg_options.is_high_priority_stream = True
-            pg_options._timeout = timedelta(days=7)
-        else:
-            pg_options = None
-
-        # Call the init process
-        torch.distributed.init_process_group(
-            backend=args.distributed_backend,
-            world_size=args.world_size, rank=args.rank,
-            timeout=timedelta(days=7),
-            pg_options=pg_options)
+        # Call the init process
+        torch.distributed.init_process_group(
+            backend=args.distributed_backend,
+            world_size=args.world_size, rank=args.rank,
+            timeout=timedelta(days=7))
 
     # Set the tensor model-parallel, pipeline model-parallel, and
     # data-parallel communicators.
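For context, the sketch below shows the initialization path as it stands after this commit: torch.distributed.init_process_group is called without any ProcessGroupNCCL options, so NCCL collectives run on default-priority CUDA streams. This is a standalone illustration rather than code from the repository; the environment-variable handling and the plain 'nccl' backend string are assumptions (Megatron reads these values through its own argument parser and args.distributed_backend).

# Minimal sketch of the post-change initialization (illustration only).
# Assumes the usual torch.distributed env:// variables are set by the
# launcher: MASTER_ADDR, MASTER_PORT, RANK, WORLD_SIZE.
import os
from datetime import timedelta

import torch

rank = int(os.environ.get('RANK', '0'))
world_size = int(os.environ.get('WORLD_SIZE', '1'))

# Bind this process to its GPU before creating the NCCL process group.
if torch.cuda.device_count() > 0:
    torch.cuda.set_device(rank % torch.cuda.device_count())

# No ProcessGroupNCCL.Options / pg_options argument anymore, so NCCL
# collectives run on default-priority CUDA streams.
torch.distributed.init_process_group(
    backend='nccl',
    world_size=world_size,
    rank=rank,
    timeout=timedelta(days=7))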