My fitting problem is nonconvex, so the loss can sometimes get worse before it improves. Given this, I want to use early stopping, but only after the first 100 or so epochs. So far I have this:
from keras.callbacks import EarlyStopping

# Early stopping
ES = [EarlyStopping(monitor='val_loss', patience=100, verbose=1, mode='auto')]

# fit model
history = model.fit(x_train, y_train, epochs=1000, batch_size=50, verbose=2,
                    shuffle=True, validation_split=.1, callbacks=ES)
Unfortunately, the fit stops very early after 10 or so epochs. I want to wait until the 100th epoch to start early stopping. Any ideas? Any suggestions other than early stopping are also appreciated.
If you use patience=100, your training should not stop before epoch 100. However, if you want a short patience but also want the monitoring to start later, you could use the method described by colllin. If you want further customization, you can always define your own callback with EarlyStopping as a parent. For your purposes you just need to override the initializer and the on_epoch_end method:
import keras

class CustomStopper(keras.callbacks.EarlyStopping):
    def __init__(self, monitor='val_loss', min_delta=0, patience=0,
                 verbose=0, mode='auto', start_epoch=100):  # add argument for starting epoch
        # forward the standard arguments so EarlyStopping does not ignore them
        super(CustomStopper, self).__init__(monitor=monitor, min_delta=min_delta,
                                            patience=patience, verbose=verbose, mode=mode)
        self.start_epoch = start_epoch

    def on_epoch_end(self, epoch, logs=None):
        # only let EarlyStopping evaluate its criterion after start_epoch
        if epoch > self.start_epoch:
            super(CustomStopper, self).on_epoch_end(epoch, logs)
You only have to pass the earliest epoch from which you want to monitor your criterion to the initializer, and check that condition before calling the parent class's method.
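As a quick sketch of how this would slot into your code (reusing the model, x_train, y_train and hyperparameters from the question; the patience value below is just an illustrative choice), you would simply swap the stock EarlyStopping instance for CustomStopper:

# short patience, but monitoring only kicks in after epoch 100
ES = [CustomStopper(monitor='val_loss', patience=10, verbose=1,
                    mode='auto', start_epoch=100)]

history = model.fit(x_train, y_train, epochs=1000, batch_size=50, verbose=2,
                    shuffle=True, validation_split=.1, callbacks=ES)

For what it's worth, newer Keras versions (tf.keras from roughly TF 2.11 onward, if I remember correctly) added a start_from_epoch argument to EarlyStopping itself, which would make the custom subclass unnecessary there.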