OpenDAS / fairscale

Commit 8fb39b2a (Unverified)
Authored Apr 05, 2024 by Amy Yang; committed by GitHub, Apr 05, 2024

    add get_cp_ranks to model_parallel initialize (#1176)

    Co-authored-by: amyyang <amyyang@meta.com>

Parent: 0af41aee
Showing 1 changed file with 6 additions and 0 deletions:

  fairscale/nn/model_parallel/initialize.py  (+6, -0)
fairscale/nn/model_parallel/initialize.py @ 8fb39b2a

```diff
@@ -159,6 +159,12 @@ def get_context_parallel_group() -> torch.distributed.ProcessGroup:
     return _CONTEXT_PARALLEL_GROUP


+def get_context_parallel_ranks() -> List[int]:
+    """Return context parallel ranks for the context parallel group."""
+    assert _CONTEXT_PARALLEL_GROUP_RANKS is not None, "context parallel group is not initialized"
+    return _CONTEXT_PARALLEL_GROUP_RANKS
+
+
 def get_context_parallel_world_size() -> int:
     """Return world size for the context parallel group."""
     return torch.distributed.get_world_size(group=get_context_parallel_group())
```
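For context, a minimal usage sketch showing how the new `get_context_parallel_ranks` accessor fits alongside the existing `get_context_parallel_group` and `get_context_parallel_world_size` helpers. It assumes `torch.distributed` and fairscale's model-parallel groups (including the context-parallel group) have already been initialized; the `report_context_parallel_layout` wrapper is hypothetical and only illustrates how the three accessors are typically combined.

```python
# Usage sketch (not part of the commit). Assumes the default process group and
# fairscale's model-parallel / context-parallel groups are already initialized
# (e.g. launched with torchrun and initialize_model_parallel(...)).
import torch.distributed as dist

from fairscale.nn.model_parallel.initialize import (
    get_context_parallel_group,
    get_context_parallel_ranks,       # added by this commit
    get_context_parallel_world_size,
)


def report_context_parallel_layout() -> None:
    # Hypothetical helper: report how this rank sits inside its context-parallel group.
    cp_ranks = get_context_parallel_ranks()      # global ranks in this CP group
    cp_size = get_context_parallel_world_size()  # equals len(cp_ranks)
    cp_rank = dist.get_rank(group=get_context_parallel_group())  # rank within the CP group

    print(
        f"global rank {dist.get_rank()}: "
        f"context-parallel rank {cp_rank}/{cp_size}, group ranks {cp_ranks}"
    )
```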