Unverified Commit 41aa2b4e authored by Lysandre Debut, committed by GitHub

Adafactor docs (#6765)

parent 971d1802
@@ -13,6 +13,11 @@ The ``.optimization`` module provides:

.. autoclass:: transformers.AdamW
    :members:
``Adafactor`` (PyTorch)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. autoclass:: transformers.Adafactor
``AdamWeightDecay`` (TensorFlow)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
...
@@ -328,31 +328,57 @@ class Adafactor(Optimizer):
*warmup_init* options. To use a manual (external) learning rate schedule you should set `scale_parameter=False` and `relative_step=False`.

Arguments:
    params (:obj:`Iterable[torch.nn.parameter.Parameter]`):
        Iterable of parameters to optimize or dictionaries defining parameter groups.
    lr (:obj:`float`, `optional`):
        The external learning rate.
    eps (:obj:`Tuple[float, float]`, `optional`, defaults to (1e-30, 1e-3)):
        Regularization constants for square gradient and parameter scale respectively.
    clip_threshold (:obj:`float`, `optional`, defaults to 1.0):
        Threshold of root mean square of final gradient update.
    decay_rate (:obj:`float`, `optional`, defaults to -0.8):
        Coefficient used to compute running averages of square gradient.
    beta1 (:obj:`float`, `optional`):
        Coefficient used for computing running averages of gradient.
    weight_decay (:obj:`float`, `optional`, defaults to 0):
        Weight decay (L2 penalty).
    scale_parameter (:obj:`bool`, `optional`, defaults to :obj:`True`):
        If :obj:`True`, learning rate is scaled by root mean square of the parameter.
    relative_step (:obj:`bool`, `optional`, defaults to :obj:`True`):
        If :obj:`True`, a time-dependent learning rate is computed instead of using the external learning rate.
    warmup_init (:obj:`bool`, `optional`, defaults to :obj:`False`):
        Time-dependent learning rate computation depends on whether warm-up initialization is being used.
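For example, these flags combine into two mutually exclusive modes. A minimal sketch,
assuming an existing ``model`` (any ``torch.nn.Module``)::

    from transformers import Adafactor

    # Default mode: lr=None and relative_step=True, so Adafactor computes a
    # time-dependent learning rate internally.
    optimizer = Adafactor(model.parameters())

    # External mode: an explicit lr is only valid with relative stepping
    # disabled; scale_parameter=False keeps the learning rate fully manual.
    optimizer = Adafactor(
        model.parameters(),
        lr=1e-3,
        relative_step=False,
        scale_parameter=False,
    )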
This implementation handles low-precision (FP16, bfloat) values, but we have not thoroughly tested it.
Recommended T5 finetuning settings:

    - Scheduled LR warm-up to fixed LR
    - Disable relative updates
    - Use clip threshold: https://arxiv.org/abs/2004.14546

    Example::

        Adafactor(model.parameters(), lr=1e-3, relative_step=False, warmup_init=False)

    - Alternatively, relative_step with warmup_init can be used.
    - Training without LR warmup or clip threshold is not recommended. Additional optimizer operations like
      gradient clipping should not be used alongside Adafactor.
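One way to realize the "scheduled LR warm-up to fixed LR" recipe above is to pair the
external-LR mode with one of the library's schedulers. A minimal sketch, assuming an
existing ``model`` and an illustrative 1,000-step warm-up::

    from transformers import Adafactor
    from transformers.optimization import get_constant_schedule_with_warmup

    optimizer = Adafactor(
        model.parameters(),
        lr=1e-3,              # the fixed LR reached after warm-up
        clip_threshold=1.0,   # keep the clip threshold, per the paper
        relative_step=False,  # drive the LR externally
        scale_parameter=False,
        warmup_init=False,
    )
    # Linear warm-up to lr over 1,000 steps, then constant at lr.
    lr_scheduler = get_constant_schedule_with_warmup(optimizer, num_warmup_steps=1000)

Call ``lr_scheduler.step()`` after each ``optimizer.step()`` so the warm-up actually advances.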
Usage::

    # replace AdamW with Adafactor
    optimizer = Adafactor(
        model.parameters(),
        lr=1e-3,
        eps=(1e-30, 1e-3),
        clip_threshold=1.0,
        decay_rate=-0.8,
        beta1=None,
        weight_decay=0.0,
        relative_step=False,
        scale_parameter=False,
        warmup_init=False
    )
""" """
def __init__(
...
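Putting the new Usage block in context: a short end-to-end sketch of swapping AdamW for
Adafactor in a plain training loop. ``model`` and ``dataloader`` are assumed to exist, and
the model is assumed to return its loss as the first output::

    from transformers import Adafactor

    # Same hyperparameters as the Usage block above.
    optimizer = Adafactor(
        model.parameters(),
        lr=1e-3,
        eps=(1e-30, 1e-3),
        clip_threshold=1.0,
        decay_rate=-0.8,
        beta1=None,
        weight_decay=0.0,
        relative_step=False,
        scale_parameter=False,
        warmup_init=False,
    )

    model.train()
    for batch in dataloader:
        loss = model(**batch)[0]  # assumes the loss is the first model output
        loss.backward()
        optimizer.step()          # no gradient clipping alongside Adafactor
        optimizer.zero_grad()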