 

Return number of epochs for EarlyStopping callback in Keras

Is there any way to return the number of epochs after which the training was stopped in Keras when using the EarlyStopping callback?

I can get the log of the training and validation loss and compute the number of epochs myself using the patience parameter, but is there a more direct way?

asked Apr 16 '18 by AlexGuevara


People also ask

What is EarlyStopping in Keras?

The EarlyStopping callback stops training when a monitored metric has stopped improving. Assuming the goal of training is to minimize the loss, the metric to be monitored would be 'loss' and the mode would be 'min'.
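
For instance, a minimal sketch of that configuration (the patience and min_delta values here are only illustrative):

from tensorflow.keras.callbacks import EarlyStopping

# Stop training once 'loss' has not decreased by at least min_delta for `patience` epochs
early_stop = EarlyStopping(monitor='loss', mode='min', patience=3, min_delta=1e-4)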

How do I choose my epochs number?

The right number of epochs depends on the inherent perplexity (or complexity) of your dataset. A good rule of thumb is to start with a value that is 3 times the number of columns in your data. If you find that the model is still improving after all epochs complete, try again with a higher value.

What is Keras callback ModelCheckpoint used for?

The ModelCheckpoint callback is used in conjunction with training via model.fit() to save a model or its weights (in a checkpoint file) at some interval, so the model or weights can be loaded later to continue training from the saved state.
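
A minimal sketch (the file name and monitored metric are illustrative):

from tensorflow.keras.callbacks import ModelCheckpoint

# Write the model to disk whenever the monitored validation loss improves
checkpoint = ModelCheckpoint('best_model.h5', monitor='val_loss', save_best_only=True)

The callback is then passed to training via model.fit(..., callbacks=[checkpoint]).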


What is a keras callback?

A callback is a powerful tool to customize the behavior of a Keras model during training, evaluation, or inference.

What are logs in keras?

Within callback methods such as on_epoch_begin (called at the beginning of an epoch during training) and on_epoch_end (called at the end of an epoch), logs is a dict containing the metrics results.
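
As an illustration, a minimal custom callback sketch that prints the logs dict it receives at the end of each epoch (the exact keys depend on the model's compiled metrics):

import tensorflow as tf

class PrintLogs(tf.keras.callbacks.Callback):
    def on_epoch_end(self, epoch, logs=None):
        # logs is a dict such as {'loss': 0.25, 'val_loss': 0.31, ...}
        print(f'Epoch {epoch}: {logs}')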



3 Answers

Use the EarlyStopping.stopped_epoch attribute: keep the callback in a separate variable, say callback, and check callback.stopped_epoch after training has stopped.
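
For example (a minimal sketch; the compiled model and the training data x_train, y_train, x_val, y_val are assumed to already exist, and the parameter values are illustrative):

from tensorflow.keras.callbacks import EarlyStopping

# Keep the callback in its own variable so it can be inspected after training
callback = EarlyStopping(monitor='val_loss', patience=5)

model.fit(x_train, y_train,
          validation_data=(x_val, y_val),
          epochs=100,
          callbacks=[callback])

# stopped_epoch is 0 if training ran all epochs, otherwise the epoch at which it stopped
print(callback.stopped_epoch)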

answered Oct 07 '22 by Maxim


You can also leverage the History() callback to find out the number of epochs the model ran for. Ex:

from keras.callbacks import History, EarlyStopping

history = History()
callbacks = [history, EarlyStopping(monitor='val_loss', patience=5, verbose=1, min_delta=1e-4)]

model.fit_generator(...., callbacks=callbacks)
# history.history['loss'] has one entry per completed epoch
number_of_epochs_it_ran = len(history.history['loss'])
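
Note that model.fit (and fit_generator) itself returns a History object, so len() of the returned history's 'loss' list gives the same count even without registering a separate History callback.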
answered Oct 07 '22 by Neeraj Komuravalli


Subtracting the patience value from the total number of epochs - as suggested in this comment - might not work in some situations. For instance, if you set epochs=100 and patience=20, and the best accuracy/loss value is found at epoch 90, training will keep going and stop at epoch 100, so this approach would give you a wrong number (100 - 20 = 80).

Moreover, as noted in this comment, EarlyStopping.stopped_epoch only gives you the epoch at which training was stopped, NOT the epoch at which the best weights were obtained. Knowing the epoch of the best weights is particularly relevant when you set restore_best_weights=True or rely on ModelCheckpoint to save the best model before training stops.

Therefore my solution is to get the index of the best value in the model's history. Assuming that the monitored metric is the validation accuracy, and relying on NumPy, here is some code:

import numpy as np

model.fit(...)
# Validation accuracy recorded for each completed epoch
hist = model.history.history['val_acc']
# Zero-based index of the epoch with the best (highest) validation accuracy
n_epochs_best = np.argmax(hist)
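
If the monitored metric is a loss rather than an accuracy, take the minimum instead, e.g. np.argmin(model.history.history['val_loss']) (assuming 'val_loss' was recorded).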
answered Oct 07 '22 by Vito Gentile