ReduceLROnPlateau class
tf_keras.callbacks.ReduceLROnPlateau(
    monitor="val_loss",
    factor=0.1,
    patience=10,
    verbose=0,
    mode="auto",
    min_delta=0.0001,
    cooldown=0,
    min_lr=0,
    **kwargs
)
Reduce learning rate when a metric has stopped improving.
Models often benefit from reducing the learning rate by a factor of 2-10 once learning stagnates. This callback monitors a quantity, and if no improvement is seen for a 'patience' number of epochs, the learning rate is reduced.
Example
from tf_keras.callbacks import ReduceLROnPlateau

reduce_lr = ReduceLROnPlateau(monitor='val_loss', factor=0.2,
                              patience=5, min_lr=0.001)
model.fit(X_train, Y_train, callbacks=[reduce_lr])
Arguments
monitor: quantity to be monitored.
factor: factor by which the learning rate will be reduced.
    new_lr = lr * factor.
patience: number of epochs with no improvement after which the learning
    rate will be reduced.
verbose: int. 0: quiet, 1: update messages.
mode: one of {'auto', 'min', 'max'}. In 'min' mode, the learning rate
    will be reduced when the quantity monitored has stopped decreasing;
    in 'max' mode it will be reduced when the quantity monitored has
    stopped increasing; in 'auto' mode, the direction is automatically
    inferred from the name of the monitored quantity.
min_delta: threshold for measuring the new optimum, to only focus on
    significant changes.
cooldown: number of epochs to wait before resuming normal operation
    after the learning rate has been reduced.
min_lr: lower bound on the learning rate.
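
The following is a more complete, self-contained sketch of the callback in use. The model architecture, the synthetic data, and the specific factor, patience, cooldown, and min_lr values are illustrative assumptions rather than part of the API reference; it assumes the tf_keras package and NumPy are installed.

import numpy as np
import tf_keras

# Synthetic regression data, purely for demonstration.
x_train = np.random.rand(256, 8).astype("float32")
y_train = np.random.rand(256, 1).astype("float32")

model = tf_keras.Sequential([
    tf_keras.layers.Dense(16, activation="relu", input_shape=(8,)),
    tf_keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")

# Halve the learning rate (factor=0.5) whenever val_loss has not improved
# by at least min_delta for 3 consecutive epochs (patience=3), wait 2
# epochs (cooldown=2) after each reduction before normal monitoring
# resumes, and never reduce below min_lr.
reduce_lr = tf_keras.callbacks.ReduceLROnPlateau(
    monitor="val_loss",
    factor=0.5,
    patience=3,
    min_delta=1e-4,
    cooldown=2,
    min_lr=1e-5,
    verbose=1,
)

model.fit(
    x_train,
    y_train,
    validation_split=0.2,
    epochs=20,
    callbacks=[reduce_lr],
)

With verbose=1 the callback prints a message each time it lowers the learning rate; after training, the current value can also be inspected via model.optimizer.learning_rate to confirm whether any reductions were applied.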