OpenDAS / Megatron-LM / Commits

Commit 7e810e41, authored Jan 24, 2022 by Vijay Korthikanti

minor fixes

Parent: d8c85650
Showing 2 changed files with 3 additions and 3 deletions:

  megatron/learning_rates.py      +2  -2
  megatron/model/transformer.py   +1  -1
megatron/learning_rates.py
@@ -128,8 +128,8 @@ class AnnealingLR(object):
         new_lr = self.get_lr()
         new_wd = self.get_wd()
         for group in self.optimizer.param_groups:
-            group['lr'] = new_lr * group['lr_mult']
-            group['weight_decay'] = new_wd * group['wd_mult']
+            group['lr'] = new_lr * group.get('lr_mult', 1.0)
+            group['weight_decay'] = new_wd * group.get('wd_mult', 1.0)

     def state_dict(self):
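
The learning-rate change replaces direct indexing of the per-group multiplier keys with dict.get and a default of 1.0, so param groups that never set 'lr_mult' or 'wd_mult' keep working instead of raising a KeyError. The commit message only says "minor fixes", so this reading is an inference; the sketch below uses a toy SGD optimizer rather than Megatron's own optimizer setup.

# Illustrative only: one param group defines the multiplier keys, one does not.
# The loop body mirrors the two changed lines in the diff above.
import torch

model = torch.nn.Linear(4, 4)
optimizer = torch.optim.SGD(
    [
        {'params': model.weight, 'lr_mult': 0.1, 'wd_mult': 0.0},  # custom multipliers
        {'params': model.bias},                                     # no multiplier keys
    ],
    lr=1.0, weight_decay=0.01)

new_lr, new_wd = 0.5, 0.01
for group in optimizer.param_groups:
    # Old code: group['lr_mult'] would raise KeyError for the bias group.
    group['lr'] = new_lr * group.get('lr_mult', 1.0)
    group['weight_decay'] = new_wd * group.get('wd_mult', 1.0)

print([(g['lr'], g['weight_decay']) for g in optimizer.param_groups])
# [(0.05, 0.0), (0.5, 0.01)]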
megatron/model/transformer.py
@@ -608,7 +608,7 @@ class ParallelTransformer(MegatronModule):
         self.num_layers = mpu.get_num_layers(
             args, args.model_type == ModelType.encoder_and_decoder)

-        self.dpr = [x.item() for x in torch.linspace(0, self.drop_path_rate, self.num_layers)]
+        self.dpr = [x.item() for x in torch.linspace(0, self.drop_path_rate, args.num_layers)]

         # Transformer layers.
         def build_layer(layer_number):
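
In the transformer change, the stochastic-depth (drop path) rates are now spread over args.num_layers, the total layer count of the model, rather than self.num_layers, which mpu.get_num_layers reduces to the per-pipeline-stage layer count. The commit gives no rationale beyond "minor fixes", but the natural reading is that the schedule should ramp once across the whole network instead of restarting on every pipeline stage. A small numeric sketch with made-up layer counts:

# Illustrative only: compare a drop-path schedule computed over the local
# (per-stage) layer count with one computed over the full model depth.
import torch

drop_path_rate = 0.3
full_num_layers = 24    # stands in for args.num_layers
local_num_layers = 6    # stands in for self.num_layers on one of 4 pipeline stages

dpr_old = [x.item() for x in torch.linspace(0, drop_path_rate, local_num_layers)]
dpr_new = [x.item() for x in torch.linspace(0, drop_path_rate, full_num_layers)]

print(len(dpr_old), round(dpr_old[-1], 3))  # 6 0.3  -> every stage would ramp 0 -> 0.3
print(len(dpr_new), round(dpr_new[-1], 3))  # 24 0.3 -> one ramp across the whole model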