```python
tf.keras.optimizers.Ftrl(
    learning_rate=0.001,
    learning_rate_power=-0.5,
    initial_accumulator_value=0.1,
    l1_regularization_strength=0.0,
    l2_regularization_strength=0.0,
    l2_shrinkage_regularization_strength=0.0,
    beta=0.0,
    weight_decay=None,
    clipnorm=None,
    clipvalue=None,
    global_clipnorm=None,
    use_ema=False,
    ema_momentum=0.99,
    ema_overwrite_frequency=None,
    jit_compile=True,
    name="Ftrl",
    **kwargs
)
```
Optimizer that implements the FTRL algorithm.
"Follow The Regularized Leader" (FTRL) is an optimization algorithm developed at Google for click-through rate prediction in the early 2010s. It is most suitable for shallow models with large and sparse feature spaces. The algorithm is described by McMahan et al., 2013. The Keras version has support for both online L2 regularization (the L2 regularization described in the paper above) and shrinkage-type L2 regularization (which is the addition of an L2 penalty to the loss function).
Initialization:

```
n = 0
sigma = 0
z = 0
```
Update rule for one variable `w`:
```
prev_n = n
n = n + g ** 2
sigma = (n ** -lr_power - prev_n ** -lr_power) / lr
z = z + g - sigma * w
if abs(z) < lambda_1:
  w = 0
else:
  w = (sgn(z) * lambda_1 - z) / ((beta + sqrt(n)) / alpha + lambda_2)
```
where:

- `lr` is the learning rate
- `g` is the gradient for the variable
- `lambda_1` is the L1 regularization strength
- `lambda_2` is the L2 regularization strength
- `lr_power` is the power to scale `n`
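As a rough, self-contained illustration of the update rule above, here is a minimal NumPy sketch for a single dense variable. The function name, the toy quadratic loss, and the choice to treat `alpha` in the rule as the same learning rate as `lr` are assumptions for this example, not part of the Keras implementation:

```python
import numpy as np

def ftrl_update(w, g, n, z, lr=0.001, lr_power=-0.5,
                lambda_1=0.0, lambda_2=0.0, beta=0.0):
    """One FTRL step for a weight vector w with gradient g.

    n and z are the per-coordinate accumulators from the update rule above.
    alpha in the rule is taken here to be the same learning rate as lr.
    """
    prev_n = n
    n = n + g ** 2
    sigma = (n ** -lr_power - prev_n ** -lr_power) / lr
    z = z + g - sigma * w

    # Soft-threshold: coordinates with |z| < lambda_1 are set exactly to 0,
    # which is what produces sparse weights.
    w = np.where(
        np.abs(z) < lambda_1,
        0.0,
        (np.sign(z) * lambda_1 - z) / ((beta + np.sqrt(n)) / lr + lambda_2),
    )
    return w, n, z

# Toy usage: run FTRL steps on the quadratic loss 0.5 * ||w - target||^2.
target = np.array([1.0, -2.0, 0.0])
w = np.zeros(3)
n = np.full(3, 0.1)   # initial_accumulator_value
z = np.zeros(3)
for _ in range(1000):
    g = w - target
    w, n, z = ftrl_update(w, g, n, z, lr=0.5, lambda_1=0.1)
print(w)
```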
Check the documentation for the `l2_shrinkage_regularization_strength` parameter for more details when shrinkage is enabled, in which case the gradient is replaced with a gradient with shrinkage.
Arguments

- `learning_rate`: A Tensor, floating point value, a schedule that is a `tf.keras.optimizers.schedules.LearningRateSchedule`, or a callable that takes no arguments and returns the actual value to use. The learning rate. Defaults to 0.001. (See the example after this list for passing a schedule.)
- `ema_momentum`: Float, defaults to 0.99. Only used if `use_ema=True`. This is the momentum to use when computing the EMA of the model's weights: `new_average = ema_momentum * old_average + (1 - ema_momentum) * current_variable_value`.
- `ema_overwrite_frequency`: Int or None, defaults to None. Only used if `use_ema=True`. Every `ema_overwrite_frequency` steps of iterations, we overwrite the model variable by its moving average. If None, the optimizer does not overwrite model variables in the middle of training, and you need to explicitly overwrite the variables at the end of training by calling `optimizer.finalize_variable_values()` (which updates the model variables in-place). When using the built-in `fit()` training loop, this happens automatically after the last epoch, and you don't need to do anything.
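As one illustration of these options (the toy model, data, and schedule values are assumptions for this example, not taken from the original text), the optimizer can be given a `LearningRateSchedule` for `learning_rate` and have EMA enabled; with the built-in `fit()` loop, the averaged weights are written back to the model automatically after the last epoch:

```python
import numpy as np
import tensorflow as tf

# Hypothetical toy data and model.
x = np.random.rand(256, 20).astype("float32")
y = (x.sum(axis=1, keepdims=True) > 10).astype("float32")
model = tf.keras.Sequential([tf.keras.layers.Dense(1, activation="sigmoid")])

# learning_rate can be a LearningRateSchedule instead of a fixed float.
schedule = tf.keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate=0.01, decay_steps=100, decay_rate=0.9
)

optimizer = tf.keras.optimizers.Ftrl(
    learning_rate=schedule,
    use_ema=True,
    ema_momentum=0.99,
    ema_overwrite_frequency=None,  # don't overwrite variables mid-training
)

model.compile(optimizer=optimizer, loss="binary_crossentropy")
model.fit(x, y, epochs=2, batch_size=32, verbose=0)

# With fit(), the EMA weights replace the model variables after the last
# epoch automatically. In a custom training loop you would instead call:
# optimizer.finalize_variable_values(model.trainable_variables)
```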