Adagrad
tf_keras.optimizers.Adagrad(
    learning_rate=0.001,
    initial_accumulator_value=0.1,
    epsilon=1e-07,
    weight_decay=None,
    clipnorm=None,
    clipvalue=None,
    global_clipnorm=None,
    use_ema=False,
    ema_momentum=0.99,
    ema_overwrite_frequency=None,
    jit_compile=True,
    name="Adagrad",
    **kwargs
)
Optimizer that implements the Adagrad algorithm.
Adagrad is an optimizer with parameter-specific learning rates, which are adapted relative to how frequently a parameter gets updated during training. The more updates a parameter receives, the smaller the updates.
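As a minimal sketch of how this plays out in practice (the toy variable, loss, and hyperparameter values below are illustrative, not part of the documented API):

import tensorflow as tf
import tf_keras

# Conceptually, for each parameter w with gradient g, Adagrad keeps a running
# accumulator of squared gradients and divides the step by its square root,
# roughly:
#     accumulator += g ** 2
#     w -= learning_rate * g / sqrt(accumulator + epsilon)
# so parameters that receive many/large gradient updates take progressively
# smaller steps.

var = tf.Variable([1.0, 2.0])
optimizer = tf_keras.optimizers.Adagrad(
    learning_rate=0.1,              # Adagrad often tolerates higher rates
    initial_accumulator_value=0.1,  # starting value of the squared-gradient sum
    epsilon=1e-7,                   # numerical-stability term
)

for _ in range(3):
    with tf.GradientTape() as tape:
        loss = tf.reduce_sum(var ** 2)  # simple quadratic toy loss
    grads = tape.gradient(loss, [var])
    optimizer.apply_gradients(zip(grads, [var]))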
Arguments
learning_rate: Initial value for the learning rate: either a floating point value, or a tf.keras.optimizers.schedules.LearningRateSchedule instance. Defaults to 0.001. Note that Adagrad tends to benefit from higher initial learning rate values compared to other optimizers. To match the exact form in the original paper, use 1.0.
initial_accumulator_value: Floating point value. Starting value for the per-parameter squared-gradient accumulators. Must be non-negative.
epsilon: Small floating point value used to maintain numerical
stability.
name: String. The name to use for momentum accumulator weights created by the optimizer.
weight_decay: Float, defaults to None. If set, weight decay is applied.
clipnorm: Float. If set, the gradient of each weight is individually clipped so that its norm is no higher than this value.
clipvalue: Float. If set, the gradient of each weight is clipped to be no higher than this value.
global_clipnorm: Float. If set, the gradient of all weights is clipped so that their global norm is no higher than this value.
use_ema: Boolean, defaults to False. If True, an exponential moving average (EMA) of the model's weights is computed (as the weight values change after each training batch), and the weights are periodically overwritten with their moving average.
ema_momentum: Float, defaults to 0.99. Only used if use_ema=True. This is the momentum to use when computing the EMA of the model's weights: new_average = ema_momentum * old_average + (1 - ema_momentum) * current_variable_value.
ema_overwrite_frequency: Int or None, defaults to None. Only used if use_ema=True. Every ema_overwrite_frequency steps of iterations, we overwrite the model variable by its moving average. If None, the optimizer does not overwrite model variables in the middle of training, and you need to explicitly overwrite the variables at the end of training by calling optimizer.finalize_variable_values() (which updates the model variables in-place); see the sketch after this list. When using the built-in fit() training loop, this happens automatically after the last epoch, and you don't need to do anything.
jit_compile: Boolean, defaults to True. If True, the optimizer will use XLA compilation.
mesh: optional tf.experimental.dtensor.Mesh instance. When provided, the optimizer will be run in DTensor mode, e.g. state tracking variable will be a DVariable, and aggregation/reduction will happen in the global DTensor context.
**kwargs: keyword arguments only used for backward compatibility.
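For instance, a minimal sketch of a custom training loop with use_ema=True, where the EMA values are copied into the model variables explicitly at the end (the toy model, data, and hyperparameter values are illustrative assumptions):

import tensorflow as tf
import tf_keras

x = tf.random.normal((32, 4))
y = tf.random.normal((32, 1))
model = tf_keras.Sequential([tf_keras.layers.Dense(1, input_shape=(4,))])
loss_fn = tf_keras.losses.MeanSquaredError()

optimizer = tf_keras.optimizers.Adagrad(
    learning_rate=0.1,
    use_ema=True,
    ema_momentum=0.99,
    ema_overwrite_frequency=None,  # never overwrite mid-training
)

for step in range(10):
    with tf.GradientTape() as tape:
        loss = loss_fn(y, model(x))
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))

# With ema_overwrite_frequency=None, copy the EMA values into the model
# variables explicitly once training is done.
optimizer.finalize_variable_values(model.trainable_variables)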
Reference

Duchi et al., 2011, "Adaptive Subgradient Methods for Online Learning and Stochastic Optimization".