Adafactor
class tf_keras.optimizers.Adafactor(
learning_rate=0.001,
beta_2_decay=-0.8,
epsilon_1=1e-30,
epsilon_2=0.001,
clip_threshold=1.0,
relative_step=True,
weight_decay=None,
clipnorm=None,
clipvalue=None,
global_clipnorm=None,
use_ema=False,
ema_momentum=0.99,
ema_overwrite_frequency=None,
jit_compile=True,
name="Adafactor",
**kwargs
)
Optimizer that implements the Adafactor algorithm.
Adafactor is commonly used in NLP tasks, and has the advantage of requiring less memory because it only saves partial information about previous gradients.
The default argument setup is based on the original paper (see reference). When gradients are of dimension > 2, the Adafactor optimizer reduces away the last 2 dimensions separately in its factored accumulator variables.
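As a minimal usage sketch, the optimizer can be constructed and passed to compile() like any other TF-Keras optimizer. The toy model, layer sizes, and loss below are illustrative assumptions, not part of the optimizer's API.

import tf_keras

# Hypothetical toy model, for illustration only.
model = tf_keras.Sequential([
    tf_keras.layers.Dense(64, activation="relu", input_shape=(32,)),
    tf_keras.layers.Dense(10),
])

# Adafactor with the paper's defaults; with relative_step=True the
# effective learning rate also decays with the iteration count.
optimizer = tf_keras.optimizers.Adafactor(learning_rate=0.001)

model.compile(
    optimizer=optimizer,
    loss=tf_keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"],
)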
Arguments
learning_rate: Initial value for the learning rate: either a floating
point value, or a tf.keras.optimizers.schedules.LearningRateSchedule
instance. Defaults to 0.001.
beta_2_decay: float, defaults to -0.8. The decay rate of beta_2.
epsilon_1: float, defaults to 1e-30. A small offset to keep the
denominator away from 0.
epsilon_2: float, defaults to 1e-3. A small offset to avoid the
learning rate becoming too small over time.
clip_threshold: float, defaults to 1.0. Clipping threshold. This is a
part of the Adafactor algorithm, independent from clipnorm, clipvalue
and global_clipnorm.
relative_step: bool, defaults to True. If learning_rate is a
constant and relative_step=True, the learning rate will be adjusted
based on the current iteration. This is the default learning rate decay
in Adafactor.
use_ema: bool, defaults to False. If True, an exponential moving
average (EMA) of the model's weights is maintained (updated after each
training batch), and the weights are periodically overwritten with
their moving average.
ema_momentum: float, defaults to 0.99. Only used if use_ema=True.
This is the momentum to use when computing
the EMA of the model's weights:
new_average = ema_momentum * old_average + (1 - ema_momentum) *
current_variable_value.
ema_overwrite_frequency: int or None, defaults to None. Only used if
use_ema=True. Every ema_overwrite_frequency steps of iterations,
we overwrite the model variable by its moving average.
If None, the optimizer
does not overwrite model variables in the middle of training, and you
need to explicitly overwrite the variables at the end of training
by calling optimizer.finalize_variable_values()
(which updates the model
variables in-place). When using the built-in fit()
training loop,
this happens automatically after the last epoch,
and you don't need to do anything (see the custom-loop sketch after
the reference below).
mesh: optional tf.experimental.dtensor.Mesh instance. When provided,
the optimizer will be run in DTensor mode, e.g. state
tracking variable will be a DVariable, and aggregation/reduction will
happen in the global DTensor context.
Reference
Shazeer, Noam and Stern, Mitchell, 2018. Adafactor: Adaptive Learning
Rates with Sublinear Memory Cost. https://arxiv.org/abs/1804.04235
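As noted for ema_overwrite_frequency above, a custom training loop has to apply the EMA weights itself at the end of training. The following is a minimal sketch under that assumption; the toy model, data, and number of steps are illustrative only.

import numpy as np
import tensorflow as tf
import tf_keras

# Illustrative toy model and data.
model = tf_keras.Sequential([tf_keras.layers.Dense(1, input_shape=(4,))])
optimizer = tf_keras.optimizers.Adafactor(
    use_ema=True,
    ema_momentum=0.99,
    ema_overwrite_frequency=None,  # never overwrite during training
)
loss_fn = tf_keras.losses.MeanSquaredError()
x = np.random.rand(16, 4).astype("float32")
y = np.random.rand(16, 1).astype("float32")

# Custom loop: unlike fit(), nothing applies the EMA weights for us.
for _ in range(10):
    with tf.GradientTape() as tape:
        loss = loss_fn(y, model(x, training=True))
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))

# Because ema_overwrite_frequency=None, explicitly copy the moving
# averages into the model variables at the end of training.
optimizer.finalize_variable_values(model.trainable_variables)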