```python
keras.optimizers.Adafactor(
    learning_rate=0.001,
    beta_2_decay=-0.8,
    epsilon_1=1e-30,
    epsilon_2=0.001,
    clip_threshold=1.0,
    relative_step=True,
    weight_decay=None,
    clipnorm=None,
    clipvalue=None,
    global_clipnorm=None,
    use_ema=False,
    ema_momentum=0.99,
    ema_overwrite_frequency=None,
    name="adafactor",
    **kwargs
)
```
Optimizer that implements the Adafactor algorithm.
Adafactor is commonly used in NLP tasks, and has the advantage of requiring less memory because it only stores partial information about previous gradients.
The default argument setup is based on the original paper (Shazeer & Stern, 2018). When gradients have more than 2 dimensions, the Adafactor optimizer reduces over each of the last 2 dimensions separately in its accumulator variables, keeping row and column statistics instead of the full second-moment tensor.
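A minimal end-to-end sketch of using this optimizer. The tiny model and random data are hypothetical placeholders; only the optimizer construction reflects the signature above:

```python
import keras
import numpy as np

# Toy model and data, just to show the optimizer in context.
model = keras.Sequential([
    keras.Input(shape=(32,)),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(10),
])

# Paper defaults; with relative_step=True the constant learning rate
# is additionally decayed based on the current step.
optimizer = keras.optimizers.Adafactor(learning_rate=0.001)

model.compile(
    optimizer=optimizer,
    loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
)

x = np.random.rand(128, 32).astype("float32")
y = np.random.randint(0, 10, size=(128,))
model.fit(x, y, epochs=1, batch_size=32)
```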
**Arguments**

- **learning_rate**: A float, a `keras.optimizers.schedules.LearningRateSchedule` instance, or a callable that takes no arguments and returns the actual value to use. The learning rate. Defaults to `0.001`.
- **relative_step**: bool, defaults to `True`. If `learning_rate` is a constant and `relative_step=True`, the learning rate will be adjusted based on the current iteration. This is the default learning rate decay in Adafactor; see the decay sketch after this list.
- **ema_momentum**: Float, defaults to `0.99`. Only used if `use_ema=True`. This is the momentum to use when computing the EMA of the model's weights: `new_average = ema_momentum * old_average + (1 - ema_momentum) * current_variable_value`.
- **ema_overwrite_frequency**: Int or `None`, defaults to `None`. Only used if `use_ema=True`. Every `ema_overwrite_frequency` steps of iterations, the model variables are overwritten with their moving average. If `None`, the optimizer does not overwrite model variables in the middle of training, and you need to explicitly overwrite them at the end of training by calling `optimizer.finalize_variable_values()` (which updates the model variables in-place). When using the built-in `fit()` training loop, this happens automatically after the last epoch, and you don't need to do anything; a custom-loop sketch follows this list.
- **loss_scale_factor**: Float or `None`, defaults to `None`. If a float, the scale factor is multiplied with the loss before computing gradients, and the inverse of the scale factor is multiplied by the gradients before updating variables. Useful for preventing underflow during mixed precision training. Alternately, `keras.optimizers.LossScaleOptimizer` will automatically set a loss scale factor; see the wrapping sketch after this list.
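As a rough illustration of the `relative_step` decay, the following sketch assumes the `min(base_lr, 1/sqrt(step))` cap applied to a constant learning rate, as described in the Adafactor paper. It is an illustrative approximation, not the library's internal code:

```python
import math

def effective_lr(base_lr: float, step: int) -> float:
    # Assumed relative-step rule: the constant learning rate is capped
    # by 1/sqrt(step), so decay kicks in once 1/sqrt(step) < base_lr.
    return min(base_lr, 1.0 / math.sqrt(step))

for step in (1, 100, 10_000, 1_000_000):
    print(f"step={step:>9}  lr={effective_lr(0.01, step):.6f}")
```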
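With `use_ema=True` and `ema_overwrite_frequency=None`, a custom training loop has to write the averaged values back itself via `optimizer.finalize_variable_values()`. A minimal sketch, assuming the TensorFlow backend; the model, data, and loop are placeholders:

```python
import keras
import numpy as np
import tensorflow as tf  # assumes the TensorFlow backend

model = keras.Sequential([keras.Input(shape=(8,)), keras.layers.Dense(1)])
loss_fn = keras.losses.MeanSquaredError()

optimizer = keras.optimizers.Adafactor(
    learning_rate=0.001,
    use_ema=True,                  # track an EMA of the weights
    ema_momentum=0.99,             # new_avg = 0.99*old_avg + 0.01*current
    ema_overwrite_frequency=None,  # never overwrite mid-training
)

x = np.random.rand(64, 8).astype("float32")
y = np.random.rand(64, 1).astype("float32")

for _ in range(10):
    with tf.GradientTape() as tape:
        loss = loss_fn(y, model(x, training=True))
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))

# fit() would do this automatically after the last epoch; in a custom
# loop the EMA must be written back into the model variables manually.
optimizer.finalize_variable_values(model.trainable_variables)
```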
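As an alternative to a fixed scale factor, `keras.optimizers.LossScaleOptimizer` can wrap Adafactor and manage the loss scale dynamically for mixed precision training. A brief sketch:

```python
import keras

# Use a mixed-precision policy (set before building the model).
keras.mixed_precision.set_global_policy("mixed_float16")

# The wrapper multiplies the loss by a dynamically adjusted factor
# before the backward pass and unscales the gradients before applying
# them, guarding against float16 underflow.
optimizer = keras.optimizers.LossScaleOptimizer(
    keras.optimizers.Adafactor(learning_rate=0.001)
)
```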