tf.keras.optimizers.Adagrad(
    learning_rate=0.001,
    initial_accumulator_value=0.1,
    epsilon=1e-07,
    name="Adagrad",
    **kwargs
)
Optimizer that implements the Adagrad algorithm.
Adagrad is an optimizer with parameter-specific learning rates, which are adapted relative to how frequently a parameter gets updated during training. The more updates a parameter receives, the smaller the updates.
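Conceptually, Adagrad keeps a per-parameter running sum of squared gradients and divides each step by its square root. The following is a minimal NumPy sketch of that update rule, with illustrative variable names; it is not the library's implementation, and in particular the exact placement of epsilon may differ:

import numpy as np

# One Adagrad step for a single parameter tensor (illustrative sketch).
# `accumulator` starts at initial_accumulator_value; `epsilon` guards
# against division by zero.
def adagrad_step(param, grad, accumulator, learning_rate=0.001, epsilon=1e-07):
    accumulator = accumulator + grad ** 2  # running sum of squared gradients
    param = param - learning_rate * grad / (np.sqrt(accumulator) + epsilon)
    return param, accumulator

params = np.array([1.0, -2.0])
accum = np.full_like(params, 0.1)  # initial_accumulator_value
params, accum = adagrad_step(params, np.array([0.5, 0.5]), accum)

Because the accumulator only grows, each parameter's effective step size shrinks as it receives more updates.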
Args:
    learning_rate: Initial value for the learning rate: either a floating point value, or a tf.keras.optimizers.schedules.LearningRateSchedule instance. Defaults to 0.001. Note that Adagrad tends to benefit from higher initial learning rate values compared to other optimizers. To match the exact form in the original paper, use 1.0.
    initial_accumulator_value: Floating point value. Starting value for the per-parameter gradient accumulators. Must be non-negative. Defaults to 0.1.
    epsilon: Small floating point value used to maintain numerical stability. Defaults to 1e-07.
    name: Optional name prefix for the operations created when applying gradients. Defaults to "Adagrad".
    **kwargs: Keyword arguments. Allowed arguments are clipvalue, clipnorm, and global_clipnorm. If clipvalue (float) is set, the gradient of each weight is clipped to be no higher than this value. If clipnorm (float) is set, the gradient of each weight is individually clipped so that its norm is no higher than this value. If global_clipnorm (float) is set, the gradient of all weights is clipped so that their global norm is no higher than this value.
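A short usage sketch combining a LearningRateSchedule with global-norm clipping; all hyperparameter values here are arbitrary, illustrative choices:

import tensorflow as tf

# Illustrative usage: a decaying learning rate schedule plus global-norm
# gradient clipping via the **kwargs described above.
schedule = tf.keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate=1.0,  # the paper's exact form uses 1.0
    decay_steps=10000,
    decay_rate=0.9,
)
optimizer = tf.keras.optimizers.Adagrad(
    learning_rate=schedule,
    global_clipnorm=1.0,
)

model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
model.compile(optimizer=optimizer, loss="mse")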