chenpangpang / transformers, commit 39c3b1d9 (unverified)
Authored Aug 17, 2020 by Stas Bekman; committed via GitHub on Aug 17, 2020
[sched] polynomial_decay_schedule use default power=1.0 (#6473)
parent 9dbe4094

Showing 1 changed file with 5 additions and 1 deletion.

src/transformers/optimization.py (+5, -1), view file @ 39c3b1d9
@@ -166,7 +166,7 @@ def get_cosine_with_hard_restarts_schedule_with_warmup(

 def get_polynomial_decay_schedule_with_warmup(
-    optimizer, num_warmup_steps, num_training_steps, lr_end=1e-7, power=2.0, last_epoch=-1
+    optimizer, num_warmup_steps, num_training_steps, lr_end=1e-7, power=1.0, last_epoch=-1
 ):
     """
     Create a schedule with a learning rate that decreases as a polynomial decay
@@ -188,6 +188,10 @@ def get_polynomial_decay_schedule_with_warmup(
         last_epoch (:obj:`int`, `optional`, defaults to -1):
             The index of the last epoch when resuming training.

+    Note: `power` defaults to 1.0 as in the fairseq implementation, which in turn is
+    based on the original BERT implementation at
+    https://github.com/google-research/bert/blob/f39e881b169b9d53bea03d2d341b31707a6c052b/optimization.py#L37
+
     Return:
         :obj:`torch.optim.lr_scheduler.LambdaLR` with the appropriate schedule.
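For context, a minimal usage sketch (not part of the commit itself) of what the changed default means in practice: with `power=1.0` the post-warmup schedule decays linearly from the optimizer's initial learning rate down to `lr_end`, matching the fairseq/BERT behaviour referenced in the added docstring note. The toy model, optimizer choice, and step counts below are illustrative assumptions, not taken from the diff.

import torch
from transformers.optimization import get_polynomial_decay_schedule_with_warmup

# Illustrative stand-ins; any model/optimizer pair is scheduled the same way.
model = torch.nn.Linear(10, 2)
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

num_training_steps = 1000
scheduler = get_polynomial_decay_schedule_with_warmup(
    optimizer,
    num_warmup_steps=100,
    num_training_steps=num_training_steps,
    lr_end=1e-7,
    # `power` is omitted, so after this commit it defaults to 1.0 (linear decay)
    # rather than the previous 2.0 (quadratic decay).
)

for step in range(num_training_steps):
    optimizer.step()   # training step (loss/backward omitted for brevity)
    scheduler.step()   # lr warms up for 100 steps, then decays linearly to lr_end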