OpenDAS / Megatron-LM
Commit 96d19aae, authored Feb 07, 2022 by Jared Casper

Don't require an even divide of layers in target model.

Parent: 4147bec2
Showing 1 changed file with 1 addition and 6 deletions.
tools/checkpoint_saver_megatron.py (+1, −6)
@@ -148,11 +148,6 @@ def save_checkpoint(queue, args):
     # Transformer layers
     #-------------------
-    if md.num_layers % args.target_pipeline_parallel_size != 0:
-        print("Source number of layers is not divisible by target pipeline parallel size")
-        exit(1)
-    layers_per_rank = md.num_layers // args.target_pipeline_parallel_size
-    assert layers_per_rank == len(models[0].language_model.encoder.layers)
     for pp_rank in range(args.target_pipeline_parallel_size):
         # For later pipeline parallel ranks, make the new models
         if pp_rank > 0:
@@ -160,7 +155,7 @@ def save_checkpoint(queue, args):
             post_process = pp_rank == args.target_pipeline_parallel_size - 1
             models = get_models(args.target_tensor_parallel_size, md.params_dtype, False, post_process)

-        for layer in range(layers_per_rank):
+        for layer in range(len(models[0].language_model.encoder.layers)):
             # get full tensors
             input_layernorm_weight = queue_get()
             input_layernorm_bias = queue_get()
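For context: before this commit, the saver refused to run unless the source layer count divided evenly across the target pipeline ranks, and then iterated a precomputed layers_per_rank. After the change, the inner loop instead asks each rank's freshly constructed model how many encoder layers it actually holds, so uneven distributions work too. A minimal sketch of the idea, using a hypothetical split_layers helper that is not part of Megatron-LM or this commit:

def split_layers(num_layers: int, pipeline_size: int) -> list[int]:
    # Hypothetical illustration (not from this commit): spread
    # num_layers across pipeline ranks, letting earlier ranks
    # carry one extra layer when the division is uneven.
    base, extra = divmod(num_layers, pipeline_size)
    return [base + (1 if rank < extra else 0) for rank in range(pipeline_size)]

# 10 layers over 4 pipeline ranks no longer has to fail:
print(split_layers(10, 4))  # -> [3, 3, 2, 2] instead of exit(1)

Because the per-rank loop bound now comes from len(models[0].language_model.encoder.layers), the saver stays correct under whatever layer distribution the target model construction uses, and the up-front divisibility check and assert become unnecessary.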
...
Write
Preview
Markdown
is supported
0%
Try again
or
attach a new file
.
Attach a file
Cancel
You are about to add
0
people
to the discussion. Proceed with caution.
Finish editing this message first!
Cancel
Please
register
or
sign in
to comment