I am training a neural network with Keras. I set num_epochs to a high number and let EarlyStopping terminate training.
from keras.models import Sequential
from keras.layers import Dense
from keras.callbacks import EarlyStopping, ModelCheckpoint

model = Sequential()
model.add(Dense(1, input_shape=(nFeatures,), activation='linear'))
model.compile(optimizer='rmsprop', loss='mse', metrics=['mse', 'mae'])
early_stopping_monitor = EarlyStopping(monitor='val_loss', patience=15, verbose=1, mode='auto')
checkpointer = ModelCheckpoint(filepath=fname_saveWeights, verbose=1, save_best_only=True)
seqModel = model.fit(X_train, y_train, batch_size=4, epochs=num_epochs,
                     validation_data=(X_test, y_test), shuffle=True,
                     callbacks=[early_stopping_monitor, checkpointer], verbose=2)
This works fine. However, I then attempt to plot the loss function:
import matplotlib.pyplot as plt

val_loss = seqModel.history['val_loss']
xc = range(num_epochs)
plt.figure()
plt.plot(xc, val_loss)
plt.show()
I am plotting against the full range of num_epochs (xc), but EarlyStopping ends training much earlier, so the shapes do not match and plotting fails.
How can I detect at what epoch EarlyStopping ended to solve the mismatch?
The verbose setting prints the ending epoch to the screen, but I cannot work out how to access that value to use in the plot.
The EarlyStopping callback stops training when a monitored metric has stopped improving. Assuming the goal of training is to minimize the loss, the metric to monitor would be 'loss' and the mode would be 'min'.
The patience argument sets the number of epochs without improvement after which training is stopped. A larger patience means the run waits longer before stopping.
The right number of epochs depends on the inherent complexity of your dataset. A rough rule of thumb is to start with a value three times the number of columns in your data; if the model is still improving after all epochs complete, try again with a higher value.
After about 50 epochs the test error begins to increase because the model has started to 'memorise the training set', even though the training error remains at its minimum (training error will often continue to improve).
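The patience mechanic described above can be sketched without Keras: track the best loss seen so far, and stop once it has failed to improve for `patience` consecutive epochs. The function name and loss values here are illustrative, not part of the Keras API.

```python
def early_stop_epoch(losses, patience):
    """Return the 0-based epoch at which training would stop,
    or None if early stopping is never triggered."""
    best = float('inf')
    wait = 0  # epochs since the last improvement
    for epoch, loss in enumerate(losses):
        if loss < best:
            best = loss
            wait = 0
        else:
            wait += 1
            if wait >= patience:
                return epoch
    return None

# Loss stops improving after epoch 2; with patience=3,
# training halts at epoch 5.
print(early_stop_epoch([0.9, 0.5, 0.4, 0.41, 0.42, 0.43, 0.44], patience=3))  # → 5
```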
It is stored (in code) as a field on the callback:
early_stopping_monitor.stopped_epoch
will give you the epoch it stopped at after training, or 0 if it did not stop early.
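For the plot itself, you do not actually need num_epochs at all: the history dict only contains entries for the epochs that actually ran, so you can size the x-axis from its length. A minimal sketch, mocking the History object returned by model.fit() so it runs without Keras:

```python
class MockHistory:
    """Stand-in for the History object model.fit() returns."""
    def __init__(self, history):
        self.history = history

# Pretend early stopping ended training after 5 epochs.
seqModel = MockHistory({'val_loss': [0.9, 0.5, 0.4, 0.41, 0.42]})

val_loss = seqModel.history['val_loss']
xc = range(len(val_loss))          # matches the epochs actually run
assert len(xc) == len(val_loss)    # no shape mismatch, regardless of num_epochs
```

With the real History object, `plt.plot(xc, val_loss)` then works whether or not early stopping fired.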