EarlyStopping
tf_keras.callbacks.EarlyStopping(
    monitor="val_loss",
    min_delta=0,
    patience=0,
    verbose=0,
    mode="auto",
    baseline=None,
    restore_best_weights=False,
    start_from_epoch=0,
)
Stop training when a monitored metric has stopped improving.

Assume the goal of training is to minimize the loss. The metric to be monitored would then be 'loss', and the mode would be 'min'. A model.fit() training loop checks at the end of every epoch whether the loss is still decreasing, taking min_delta and patience into account if applicable. Once the loss is found to be no longer decreasing, model.stop_training is set to True and training terminates.

The quantity to be monitored needs to be available in the logs dict. To make it so, pass the loss or metrics at model.compile().
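The min_delta/patience bookkeeping described above can be sketched as a small stand-alone loop. This is a simplification for illustration only, assuming mode='min'; epochs_run is a hypothetical helper, not part of the TF-Keras API:

```python
# Illustrative sketch of the per-epoch decision EarlyStopping makes in
# "min" mode. A simplification for exposition, NOT the actual TF-Keras
# implementation; the name `epochs_run` is hypothetical.
def epochs_run(losses, min_delta=0.0, patience=0):
    """Return how many epochs would run before early stopping triggers."""
    best = float("inf")
    wait = 0
    for epoch, loss in enumerate(losses):
        if loss < best - min_delta:   # improvement must exceed min_delta
            best = loss
            wait = 0                  # reset the patience counter
        else:
            wait += 1                 # one more epoch without improvement
            if wait >= patience:      # patience exhausted: stop training
                return epoch + 1
    return len(losses)
```

For instance, with patience=3 and a loss that improves on the second epoch but stalls afterward, training would run for five epochs: the two improving epochs plus three stalled ones.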
Arguments

- monitor: Quantity to be monitored. Defaults to "val_loss".
- min_delta: Minimum change in the monitored quantity to qualify as an improvement, i.e. an absolute change of less than min_delta will count as no improvement. Defaults to 0.
- patience: Number of epochs with no improvement after which training will be stopped. Defaults to 0.
- verbose: Verbosity mode, 0 or 1. Mode 0 is silent, and mode 1 displays messages when the callback takes an action. Defaults to 0.
- mode: One of {"auto", "min", "max"}. In "min" mode, training will stop when the quantity monitored has stopped decreasing; in "max" mode it will stop when the quantity monitored has stopped increasing; in "auto" mode, the direction is automatically inferred from the name of the monitored quantity. Defaults to "auto".
- baseline: Baseline value for the monitored quantity. If not None, training will stop if the model doesn't show improvement over the baseline. Defaults to None.
- restore_best_weights: Whether to restore model weights from the epoch with the best value of the monitored quantity. If False, the model weights obtained at the last step of training are used. An epoch will be restored regardless of the performance relative to the baseline. If no epoch improves on baseline, training will run for patience epochs and restore weights from the best epoch in that set. Defaults to False.
- start_from_epoch: Number of epochs to wait before starting to monitor improvement. This allows for a warm-up period in which no improvement is expected and thus training will not be stopped. Defaults to 0.

Example
>>> callback = tf.keras.callbacks.EarlyStopping(monitor='loss', patience=3)
>>> # This callback will stop the training when there is no improvement in
>>> # the loss for three consecutive epochs.
>>> model = tf.keras.models.Sequential([tf.keras.layers.Dense(10)])
>>> model.compile(tf.keras.optimizers.SGD(), loss='mse')
>>> history = model.fit(np.arange(100).reshape(5, 20), np.zeros(5),
... epochs=10, batch_size=1, callbacks=[callback],
... verbose=0)
>>> len(history.history['loss']) # Only 4 epochs are run.
4
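The interaction between baseline and restore_best_weights described above can likewise be illustrated with a stand-alone simulation. Again a simplification, not TF-Keras internals; simulate is a hypothetical helper, and mode="min" with min_delta=0 is assumed:

```python
# Hypothetical simulation of restore_best_weights=True semantics; a
# simplification, not TF-Keras internals (mode="min", min_delta=0).
def simulate(losses, patience=1, baseline=None):
    """Return (epochs_run, restored_epoch) for a given loss history."""
    best = baseline if baseline is not None else float("inf")
    best_seen = float("inf")  # best weights are tracked regardless of baseline
    restored = None
    wait = 0
    for epoch, loss in enumerate(losses):
        if loss < best_seen:          # remember the best epoch's "weights"
            best_seen = loss
            restored = epoch
        if loss < best:               # improvement over best-so-far / baseline
            best = loss
            wait = 0
        else:
            wait += 1
            if wait >= patience:      # stop: restore the best epoch seen
                return epoch + 1, restored
    return len(losses), restored

# With baseline=0.5 and losses [0.9, 0.8, 0.7], no epoch beats the baseline,
# so training runs for patience=2 epochs and the best epoch within that set
# (epoch 1, loss 0.8) is the one whose weights would be restored.
```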