For now I'm using early stopping in Keras like this:
from sklearn.model_selection import train_test_split
from keras.preprocessing.image import ImageDataGenerator
from keras.callbacks import EarlyStopping

X, y = load_data('train_data')
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.1, random_state=12)

datagen = ImageDataGenerator(
    horizontal_flip=True,
    vertical_flip=True)

early_stopping_callback = EarlyStopping(monitor='val_loss', patience=epochs_to_wait_for_improve)

history = model.fit_generator(datagen.flow(X_train, y_train, batch_size=batch_size),
                              steps_per_epoch=len(X_train) // batch_size,
                              validation_data=(X_test, y_test),
                              epochs=n_epochs,
                              callbacks=[early_stopping_callback])
But at the end of model.fit_generator, the model I'm left with is the one from epochs_to_wait_for_improve epochs after the best epoch, since that is when training stops. What I actually want is to save the model with the minimum val_loss. Does that make sense, and is it possible?
As far as I remember, early stopping does not save any model to disk automatically. The EarlyStopping class has a restore_best_weights parameter, but that only restores the best weights to the in-memory model at the end of training (if I remember correctly); it does not write a checkpoint file.
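A minimal sketch of restore_best_weights, assuming the tensorflow.keras import path and an illustrative patience value (restore_best_weights is available in Keras 2.2.3 and later):

```python
from tensorflow.keras.callbacks import EarlyStopping

# With restore_best_weights=True, once training stops the model's weights
# are rolled back in memory to those of the epoch with the best monitored
# value. Nothing is written to disk by this callback.
early_stopping_callback = EarlyStopping(
    monitor='val_loss',
    patience=5,                 # illustrative value
    restore_best_weights=True,  # roll back to the best epoch's weights
)
```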
Early stopping is a method that allows you to specify an arbitrarily large number of training epochs and stop training once the model performance stops improving on the validation dataset.
save_best_only: if save_best_only=True, the model is saved only when it is considered the "best", and the latest best model according to the monitored quantity will not be overwritten by a worse one. If filepath doesn't contain formatting options like {epoch}, the file will be overwritten by each new better model.
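To illustrate the filepath formatting (assuming the tensorflow.keras import path; the file name here is just an example):

```python
from tensorflow.keras.callbacks import ModelCheckpoint

# Because the path contains {epoch} and {val_loss} placeholders, each
# improvement is written to its own file instead of overwriting one file.
checkpoint = ModelCheckpoint(
    'model-{epoch:02d}-{val_loss:.4f}.h5',
    monitor='val_loss',
    save_best_only=True,  # only write when val_loss improves
    mode='min',           # smaller val_loss is better
)
```

With a plain path such as 'model.h5', the same callback would instead keep overwriting that single file with the best model so far.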
patience: the number of epochs without improvement after which training will be stopped. A larger patience means training waits longer for an improvement before stopping.
Yes, it's possible with one more callback, ModelCheckpoint; here is the code:
from keras.callbacks import EarlyStopping, ModelCheckpoint

early_stopping_callback = EarlyStopping(monitor='val_loss', patience=epochs_to_wait_for_improve)
checkpoint_callback = ModelCheckpoint(model_name + '.h5', monitor='val_loss', verbose=1,
                                      save_best_only=True, mode='min')

history = model.fit_generator(datagen.flow(X_train, y_train, batch_size=batch_size),
                              steps_per_epoch=len(X_train) // batch_size,
                              validation_data=(X_test, y_test),
                              epochs=n_epochs,
                              callbacks=[early_stopping_callback, checkpoint_callback])
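After training, the checkpoint with the minimum val_loss can be reloaded from disk. A minimal sketch, assuming the tensorflow.keras import path and a hypothetical file name 'best_model.h5' (use whatever path you passed to ModelCheckpoint):

```python
import os

from tensorflow.keras.models import load_model

model_path = 'best_model.h5'  # hypothetical path; match the ModelCheckpoint filepath
if os.path.exists(model_path):
    # Reload the model saved at the epoch with the lowest val_loss
    best_model = load_model(model_path)
```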