
Keras Callback EarlyStopping comparing training and validation loss


I'm fitting a neural network in Keras (Python).

To avoid overfitting, I would like to monitor the training/validation loss and create a proper callback that stops training when the training loss becomes too much lower than the validation loss.

An example of a callback is:

callback = [EarlyStopping(monitor='val_loss', value=45, verbose=0, mode='auto')]

Is there any way to stop training when the training loss is too small compared to the validation loss?

Thank you in advance

asked Feb 26 '17 by Tommaso Guerrini

People also ask

Should validation loss be greater than training?

Symptoms: the validation loss is consistently lower than the training loss, the gap between them remains more or less the same size, and the training loss fluctuates. A common cause is Dropout, which penalizes model variance by randomly dropping neurons in a layer during training, so it affects the training loss but not the validation loss.

Why is the training loss much higher than the testing loss Keras?

A Keras model has two modes: training and testing. Regularization mechanisms, such as Dropout and L1/L2 weight regularization, are turned off at testing time, so they are reflected in the training-time loss but not in the test-time loss.
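A minimal sketch of this effect (assuming a modern tf.keras-style API, which is an assumption about your installed version): the same Dropout layer behaves differently depending on the training flag.

import numpy as np
import tensorflow as tf

dropout = tf.keras.layers.Dropout(rate=0.5)
x = np.ones((1, 8), dtype='float32')

print(dropout(x, training=True))   # roughly half the units zeroed, survivors scaled by 2
print(dropout(x, training=False))  # unchanged: dropout is disabled outside training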

What does restore_best_weights do in Keras?

restore_best_weights: Whether to restore model weights from the epoch with the best value of the monitored quantity. If False, the model weights obtained at the last step of training are used. An epoch will be restored regardless of its performance relative to the baseline.
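For reference, a short sketch of the built-in option (restore_best_weights requires a reasonably recent Keras release, which is an assumption here):

from keras.callbacks import EarlyStopping

# Stop after val_loss has failed to improve for 5 consecutive epochs,
# then roll the model back to the weights of the best epoch seen.
early_stop = EarlyStopping(monitor='val_loss',
                           patience=5,
                           restore_best_weights=True,
                           verbose=1)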

What if validation loss is less than training loss?

The second reason you may see validation loss lower than training loss is how the loss values are measured and reported: training loss is measured during each epoch, while validation loss is measured after each epoch.
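If you want a like-for-like comparison, one option (a sketch, not part of the original answer; x_train and y_train are placeholder names) is to re-evaluate the training loss at the end of each epoch:

from keras.callbacks import Callback

class EndOfEpochTrainLoss(Callback):
    # Re-evaluates the training loss after each epoch so it is measured
    # at the same point in time as val_loss (assumes a single-loss model
    # with no extra metrics, so evaluate() returns a scalar).
    def __init__(self, x_train, y_train):
        super(EndOfEpochTrainLoss, self).__init__()
        self.x_train = x_train
        self.y_train = y_train

    def on_epoch_end(self, epoch, logs=None):
        loss = self.model.evaluate(self.x_train, self.y_train, verbose=0)
        print('Epoch %d: end-of-epoch training loss = %.4f' % (epoch, loss))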


1 Answer

You can create a custom callback class for your purpose.

I have created one that should correspond to your needs:

import warnings

import numpy as np
from keras.callbacks import Callback


class CustomEarlyStopping(Callback):
    def __init__(self, ratio=0.0, patience=0, verbose=0):
        super(CustomEarlyStopping, self).__init__()

        self.ratio = ratio        # stop once train_loss / val_loss <= ratio
        self.patience = patience  # epochs to tolerate the breach before stopping
        self.verbose = verbose
        self.wait = 0
        self.stopped_epoch = 0
        self.monitor_op = np.greater

    def on_train_begin(self, logs=None):
        self.wait = 0  # allow instances to be re-used

    def on_epoch_end(self, epoch, logs=None):
        logs = logs or {}
        current_val = logs.get('val_loss')
        current_train = logs.get('loss')
        if current_val is None or current_train is None:
            warnings.warn('CustomEarlyStopping requires loss and val_loss available!',
                          RuntimeWarning)
            return

        # As long as train_loss / val_loss stays above the ratio, keep training;
        # once it drops to the threshold or below, start counting epochs.
        if self.monitor_op(np.divide(current_train, current_val), self.ratio):
            self.wait = 0
        else:
            if self.wait >= self.patience:
                self.stopped_epoch = epoch
                self.model.stop_training = True
            self.wait += 1

    def on_train_end(self, logs=None):
        if self.stopped_epoch > 0 and self.verbose > 0:
            print('Epoch %05d: early stopping' % self.stopped_epoch)

I took the liberty of interpreting your question as: stop training if the ratio train_loss / val_loss falls below a certain threshold. This ratio argument should be between 0.0 and 1.0. However, 1.0 is dangerous, because the validation loss and the training loss can fluctuate erratically at the beginning of training.

You can also pass a patience argument, which makes the callback wait to see whether the threshold breach persists for a certain number of epochs before stopping.

The way to use this is, for example:

callbacks = [CustomEarlyStopping(ratio=0.5, patience=2, verbose=1),
             ... other callbacks ...]
...
model.fit(..., callbacks=callbacks)

In this case it will stop once the training loss has stayed at or below 0.5 * val_loss for more than 2 consecutive epochs.
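For completeness, here is a self-contained sketch wiring the callback into fit() (the toy model and random data are assumptions for illustration only):

import numpy as np
from keras.models import Sequential
from keras.layers import Dense

# Toy data and model, just to exercise the callback.
x = np.random.rand(1000, 20)
y = np.random.randint(0, 2, size=(1000, 1))

model = Sequential([
    Dense(32, activation='relu', input_shape=(20,)),
    Dense(1, activation='sigmoid'),
])
model.compile(optimizer='adam', loss='binary_crossentropy')

model.fit(x, y,
          epochs=50,
          validation_split=0.2,
          callbacks=[CustomEarlyStopping(ratio=0.5, patience=2, verbose=1)])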

Does that help you?

answered Sep 25 '22 by Nassim Ben