
ReduceLROnPlateau fallback to the previous weights with the minimum acc_loss

I'm using ReduceLROnPlateau as a fit callback to reduce the LR. Since I'm using patience=10, by the time the LR reduction is triggered the model may already be far from the best weights.

Is there a way to go back to the weights with the minimum val_loss and resume training from that point with the new LR?

Does that make sense?

I can do it manually using the EarlyStopping and ModelCheckpoint('best.hdf5', save_best_only=True, monitor='val_loss', mode='min') callbacks, but I don't know whether that makes sense.
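
Roughly what I mean by the manual approach (just a sketch, assuming tf.keras; the patience value and file name are placeholders):

from tensorflow.keras.callbacks import EarlyStopping, ModelCheckpoint

# stop once val_loss has not improved for `patience` epochs, while always
# keeping the best weights on disk; I would then reload 'best.hdf5' and
# call fit() again by hand with a lower learning rate
callbacks = [
    EarlyStopping(monitor='val_loss', mode='min', patience=10),
    ModelCheckpoint('best.hdf5', save_best_only=True,
                    monitor='val_loss', mode='min'),
]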

asked Sep 07 '18 by Mquinteiro
2 Answers

Here's a working example following @nuric's direction:

from tensorflow.python.keras.callbacks import ReduceLROnPlateau
from tensorflow.python.platform import tf_logging as logging

class ReduceLRBacktrack(ReduceLROnPlateau):
    def __init__(self, best_path, *args, **kwargs):
        super(ReduceLRBacktrack, self).__init__(*args, **kwargs)
        self.best_path = best_path

    def on_epoch_end(self, epoch, logs=None):
        current = logs.get(self.monitor)
        if current is None:
            logging.warning('Reduce LR on plateau conditioned on metric `%s` '
                            'which is not available. Available metrics are: %s',
                            self.monitor, ','.join(list(logs.keys())))
        elif not self.monitor_op(current, self.best):  # not a new best
            if not self.in_cooldown():  # and we're not in cooldown
                if self.wait + 1 >= self.patience:  # the LR is about to be reduced
                    # load the best model saved so far before dropping the LR
                    print("Backtracking to best model before reducing LR")
                    self.model.load_weights(self.best_path)

        super().on_epoch_end(epoch, logs)  # actually reduce LR

A ModelCheckpoint callback can be used to keep the best-model dump up to date, e.g. pass the following two callbacks to model.fit():

from tensorflow.python.keras.callbacks import ModelCheckpoint

model_checkpoint_path = <path to checkpoint>
c1 = ModelCheckpoint(model_checkpoint_path,
                     save_best_only=True,
                     monitor=...)
c2 = ReduceLRBacktrack(best_path=model_checkpoint_path, monitor=...)
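
For completeness, a minimal sketch of wiring the two callbacks into training (the data variables and epoch count here are assumptions):

model.fit(x_train, y_train,
          validation_data=(x_val, y_val),
          epochs=100,
          callbacks=[c1, c2])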
answered Sep 28 '22 by Daugmented

You could create a custom callback inheriting from ReduceLROnPlateau, something along the lines of:

class CheckpointLR(ReduceLROnPlateau):
    # override on_epoch_end() to keep a copy of the previous epoch's weights
    def on_epoch_end(self, epoch, logs=None):
        if not self.in_cooldown():
            temp = self.model.get_weights()
            if getattr(self, 'last_weights', None) is not None:
                self.model.set_weights(self.last_weights)
            self.last_weights = temp
        super().on_epoch_end(epoch, logs)  # actually reduce LR
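
A minimal usage sketch, assuming the constructor arguments inherited from ReduceLROnPlateau (the monitor, factor, and patience values are placeholders). Since this variant keeps the previous weights in memory rather than on disk, it can be passed to fit() on its own:

lr_callback = CheckpointLR(monitor='val_loss', factor=0.5, patience=10)
model.fit(x_train, y_train,
          validation_data=(x_val, y_val),
          epochs=100,
          callbacks=[lr_callback])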
answered Sep 28 '22 by nuric