tf.keras.optimizers.Adadelta(
    learning_rate=0.001,
    rho=0.95,
    epsilon=1e-07,
    weight_decay=None,
    clipnorm=None,
    clipvalue=None,
    global_clipnorm=None,
    use_ema=False,
    ema_momentum=0.99,
    ema_overwrite_frequency=None,
    jit_compile=True,
    name="Adadelta",
    **kwargs
)
Optimizer that implements the Adadelta algorithm.
Adadelta optimization is a stochastic gradient descent method that is based on an adaptive learning rate per dimension to address two drawbacks:

- The continual decay of learning rates throughout training.
- The need for a manually selected global learning rate.
Adadelta is a more robust extension of Adagrad that adapts learning rates based on a moving window of gradient updates, instead of accumulating all past gradients. This way, Adadelta continues learning even when many updates have been done. Unlike Adagrad, the original version of Adadelta does not require an initial learning rate to be set. In this version, the initial learning rate can be set, as in most other Keras optimizers.
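Usage is the same as for other Keras optimizers. A minimal sketch (the model, data, and loss below are illustrative assumptions, not part of the API):

import tensorflow as tf

# Build a small toy model; the architecture is purely illustrative.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(8,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1),
])

# learning_rate=1.0 matches the exact form of the original paper.
optimizer = tf.keras.optimizers.Adadelta(learning_rate=1.0, rho=0.95)
model.compile(optimizer=optimizer, loss="mse")

# Random toy data, just to make the example runnable.
x = tf.random.normal((64, 8))
y = tf.random.normal((64, 1))
model.fit(x, y, epochs=2, verbose=0)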
Arguments

learning_rate: Initial value for the learning rate: either a floating point value, or a tf.keras.optimizers.schedules.LearningRateSchedule instance. Defaults to 0.001. Note that Adadelta tends to benefit from higher initial learning rate values compared to other optimizers. To match the exact form in the original paper, use 1.0. Passing a schedule is shown in the first sketch after this list.

rho: A Tensor or a floating point value. The decay rate. Defaults to 0.95.

ema_momentum: Float, defaults to 0.99. Only used if use_ema=True. This is the momentum to use when computing the EMA of the model's weights: new_average = ema_momentum * old_average + (1 - ema_momentum) * current_variable_value.

ema_overwrite_frequency: Int or None, defaults to None. Only used if use_ema=True. Every ema_overwrite_frequency steps of iterations, the model variables are overwritten by their moving average. If None, the optimizer does not overwrite model variables in the middle of training, and you need to explicitly overwrite them at the end of training by calling optimizer.finalize_variable_values(), which updates the model variables in-place (see the second sketch after this list). When using the built-in fit() training loop, this happens automatically after the last epoch, and you don't need to do anything.
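First sketch: passing a LearningRateSchedule as the learning_rate. The choice of ExponentialDecay and its hyperparameters here are illustrative assumptions, not prescribed values:

import tensorflow as tf

# Decay the learning rate over training instead of using a fixed float.
# Initial rate, decay_steps, and decay_rate below are illustrative.
schedule = tf.keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate=1.0,  # Adadelta tolerates higher initial rates
    decay_steps=10000,
    decay_rate=0.9,
)
optimizer = tf.keras.optimizers.Adadelta(learning_rate=schedule, rho=0.95)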
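Second sketch: using the EMA options in a custom training loop and finalizing the variable values by hand. This assumes the TF 2.11+ optimizer API, where finalize_variable_values() takes the list of variables to overwrite; the model, data, and step count are illustrative:

import tensorflow as tf

# Toy model and data, purely illustrative.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(1),
])
optimizer = tf.keras.optimizers.Adadelta(
    learning_rate=1.0,
    use_ema=True,
    ema_momentum=0.99,
    # ema_overwrite_frequency is left as None: variables are only
    # overwritten when finalize_variable_values() is called below.
)
loss_fn = tf.keras.losses.MeanSquaredError()
x = tf.random.normal((32, 4))
y = tf.random.normal((32, 1))

for _ in range(100):  # illustrative number of steps
    with tf.GradientTape() as tape:
        loss = loss_fn(y, model(x, training=True))
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))

# Overwrite the model variables with their moving averages in-place.
optimizer.finalize_variable_values(model.trainable_variables)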