tf.keras.optimizers.Adamax(
    learning_rate=0.001,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-07,
    weight_decay=None,
    clipnorm=None,
    clipvalue=None,
    global_clipnorm=None,
    use_ema=False,
    ema_momentum=0.99,
    ema_overwrite_frequency=None,
    jit_compile=True,
    name="Adamax",
    **kwargs
)
Optimizer that implements the Adamax algorithm.
Adamax, a variant of Adam based on the infinity norm, is a first-order gradient-based optimization method. Because it adapts the learning rate based on data characteristics, it is suited to learning time-variant processes, e.g., speech data with dynamically changing noise conditions. Default parameters follow those provided in the paper (see references below).
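A minimal standalone usage sketch (the toy variable and quadratic loss below are illustrative assumptions, not part of the API):

import tensorflow as tf

opt = tf.keras.optimizers.Adamax(learning_rate=0.001)
var = tf.Variable(10.0)
loss_fn = lambda: (var ** 2) / 2.0  # toy loss whose gradient w.r.t. var is var
for _ in range(3):
    opt.minimize(loss_fn, [var])
print(var.numpy())  # var moves toward 0 as the loss is minimized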
m = 0  # Initialize 1st moment vector
u = 0  # Initialize the exponentially weighted infinity norm
t = 0  # Initialize timestep
The update rule for parameter w with gradient g is described at the end of section 7.1 of the paper (see the reference section):
t += 1
m = beta1 * m + (1 - beta1) * g
u = max(beta2 * u, abs(g))
current_lr = learning_rate / (1 - beta1 ** t)
w = w - current_lr * m / (u + epsilon)
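For concreteness, the same update can be written in plain NumPy. The helper below and its toy quadratic example are illustrative assumptions that mirror the pseudocode above, not the library's internal implementation:

import numpy as np

def adamax_step(w, g, m, u, t, learning_rate=0.001, beta1=0.9, beta2=0.999, epsilon=1e-07):
    # One Adamax update, following the pseudocode above.
    t += 1
    m = beta1 * m + (1 - beta1) * g                # 1st moment estimate
    u = np.maximum(beta2 * u, np.abs(g))           # exponentially weighted infinity norm
    current_lr = learning_rate / (1 - beta1 ** t)  # bias-corrected learning rate
    w = w - current_lr * m / (u + epsilon)
    return w, m, u, t

# Toy usage: minimize f(w) = 0.5 * ||w||^2, whose gradient is simply w.
w = np.array([1.0, -2.0])
m = np.zeros_like(w)
u = np.zeros_like(w)
t = 0
for _ in range(100):
    g = w  # gradient of the toy quadratic loss
    w, m, u, t = adamax_step(w, g, m, u, t)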
learning_rate: A tf.Tensor, floating point value, a schedule that is a tf.keras.optimizers.schedules.LearningRateSchedule, or a callable that takes no arguments and returns the actual value to use. The learning rate. Defaults to 0.001.
ema_momentum: Float, defaults to 0.99. Only used if use_ema=True. This is the momentum to use when computing the EMA of the model's weights: new_average = ema_momentum * old_average + (1 - ema_momentum) * current_variable_value.
ema_overwrite_frequency: Int or None, defaults to None. Only used if use_ema=True. Every ema_overwrite_frequency steps of iterations, we overwrite the model variable with its moving average. If None, the optimizer does not overwrite model variables in the middle of training, and you need to explicitly overwrite the variables at the end of training by calling optimizer.finalize_variable_values() (which updates the model variables in-place); see the training-loop sketch after this argument list. When using the built-in fit() training loop, this happens automatically after the last epoch, and you don't need to do anything.
mesh: optional tf.experimental.dtensor.Mesh instance. When provided, the optimizer will be run in DTensor mode, e.g., state-tracking variables will be DVariables, and aggregation/reduction will happen in the global DTensor context.
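As noted for ema_overwrite_frequency, a custom training loop has to copy the EMA values back into the model variables explicitly. A hedged sketch follows; the tiny model, synthetic data, and step count are assumptions for illustration, not part of this API:

import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
optimizer = tf.keras.optimizers.Adamax(
    learning_rate=0.001,
    use_ema=True,
    ema_momentum=0.99,
    ema_overwrite_frequency=None,  # do not overwrite variables during training
)
loss_fn = tf.keras.losses.MeanSquaredError()

# Synthetic data, purely for illustration.
x = tf.random.normal((32, 4))
y = tf.random.normal((32, 1))

for step in range(10):
    with tf.GradientTape() as tape:
        loss = loss_fn(y, model(x, training=True))
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))

# Because ema_overwrite_frequency is None, copy the EMA values into the model
# variables explicitly at the end of training.
optimizer.finalize_variable_values(model.trainable_variables)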