Commit 0bac688c authored by Myle Ott, committed by Facebook Github Bot

Fix --end-learning-rate in polynomial LR schedule

Summary: Pull Request resolved: https://github.com/fairinternal/fairseq-py/pull/699

Differential Revision: D16068551

Pulled By: myleott

fbshipit-source-id: dddd8768b531032af7c4598af9dae3c6c00ff9ac
parent 89e077c3
@@ -60,7 +60,9 @@ class PolynomialDecaySchedule(FairseqLRScheduler):
         """Update the learning rate after each update."""
         if self.args.warmup_updates > 0 and num_updates <= self.args.warmup_updates:
             self.warmup_factor = num_updates / float(self.args.warmup_updates)
-            self.optimizer.set_lr(self.warmup_factor * self.lr)
+            lr = self.warmup_factor * self.lr
+        elif num_updates >= self.total_num_update:
+            lr = self.end_learning_rate
+        else:
         else:
             warmup = self.args.warmup_updates
             lr_range = self.lr - self.end_learning_rate
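For context, a minimal sketch of how step_update might read with this change applied. Only the lines in the hunk above come from the commit; the decay computation in the final else branch, the self.power attribute, and the single optimizer.set_lr call at the end are assumptions based on the standard polynomial decay formula, since they fall outside the hunk shown here.

def step_update(self, num_updates):
    """Update the learning rate after each update."""
    if self.args.warmup_updates > 0 and num_updates <= self.args.warmup_updates:
        # Linear warmup from 0 toward the base learning rate.
        self.warmup_factor = num_updates / float(self.args.warmup_updates)
        lr = self.warmup_factor * self.lr
    elif num_updates >= self.total_num_update:
        # The fix: once past total_num_update, hold the learning rate at
        # --end-learning-rate instead of evaluating the decay expression.
        lr = self.end_learning_rate
    else:
        # Assumed tail (elided from the hunk): polynomial decay from
        # self.lr down to self.end_learning_rate over total_num_update steps.
        warmup = self.args.warmup_updates
        lr_range = self.lr - self.end_learning_rate
        pct_remaining = 1 - (num_updates - warmup) / (self.total_num_update - warmup)
        lr = lr_range * pct_remaining ** self.power + self.end_learning_rate
    # Assumed: a single set_lr at the end replaces the in-branch call
    # removed by this commit.
    self.optimizer.set_lr(lr)
    return self.optimizer.get_lr()

Presumably the motivation for the new elif: once num_updates exceeds total_num_update, pct_remaining goes negative, so the decay expression would push the learning rate below --end-learning-rate rather than clamping at it.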