Accessing validation data within a custom callback

I'm fitting a model with a train_generator, and by means of a custom callback I want to compute custom metrics on my validation_generator. How can I access the validation_steps and validation_data parameters within a custom callback? They are not in self.params, and I can't find them in self.model either. Here's what I'd like to do; any different approach would be welcome.

model.fit_generator(generator=train_generator,
                    steps_per_epoch=steps_per_epoch,
                    epochs=epochs,
                    validation_data=validation_generator,
                    validation_steps=validation_steps,
                    callbacks=[CustomMetrics()])


class CustomMetrics(keras.callbacks.Callback):

    def on_epoch_end(self, epoch, logs={}):
        for i in range(validation_steps):
            # features, labels = next(validation_data)
            # compute custom metric: f(features, labels)
            pass
        return

keras: 2.1.1

Update

I managed to pass my validation data to a custom callback's constructor. However, this results in an annoying "The kernel appears to have died. It will restart automatically." message. I doubt this is the right way to do it. Any suggestions?

import numpy as np
from sklearn.metrics import f1_score, precision_score, recall_score


class CustomMetrics(keras.callbacks.Callback):

    def __init__(self, validation_generator, validation_steps):
        super(CustomMetrics, self).__init__()
        self.validation_generator = validation_generator
        self.validation_steps = validation_steps

    def on_epoch_end(self, epoch, logs={}):
        self.scores = {
            'recall_score': [],
            'precision_score': [],
            'f1_score': []
        }
        for batch_index in range(self.validation_steps):
            features, y_true = next(self.validation_generator)
            y_pred = np.asarray(self.model.predict(features))
            y_pred = y_pred.round().astype(int)

            self.scores['recall_score'].append(recall_score(y_true[:, 0], y_pred[:, 0]))
            self.scores['precision_score'].append(precision_score(y_true[:, 0], y_pred[:, 0]))
            self.scores['f1_score'].append(f1_score(y_true[:, 0], y_pred[:, 0]))
        return


metrics = CustomMetrics(validation_generator, validation_steps)

model.fit_generator(generator=train_generator,
                    steps_per_epoch=steps_per_epoch,
                    epochs=epochs,
                    validation_data=validation_generator,
                    validation_steps=validation_steps,
                    shuffle=True,
                    callbacks=[metrics],
                    verbose=1)
asked Dec 06 '17 by w00dy
1 Answer

You can iterate directly over self.validation_data to aggregate all the validation data at the end of each epoch. If you want to calculate precision, recall and F1 across the complete validation dataset:

# Validation metrics callback: validation precision, recall and F1
# Some of the code was adapted from
# https://medium.com/@thongonary/how-to-compute-f1-score-for-each-epoch-in-keras-a1acd17715a2
import numpy as np
from keras import callbacks
from sklearn.metrics import f1_score, precision_score, recall_score


class Metrics(callbacks.Callback):

    def on_train_begin(self, logs={}):
        self.val_f1s = []
        self.val_recalls = []
        self.val_precisions = []

    def on_epoch_end(self, epoch, logs):
        # 5.4.1 For each validation batch
        for batch_index in range(0, len(self.validation_data)):
            # 5.4.1.1 Get the batch target values
            temp_targ = self.validation_data[batch_index][1]
            # 5.4.1.2 Get the batch prediction values
            temp_predict = (np.asarray(self.model.predict(
                                self.validation_data[batch_index][0]))).round()
            # 5.4.1.3 Append them to the corresponding output objects
            if batch_index == 0:
                val_targ = temp_targ
                val_predict = temp_predict
            else:
                val_targ = np.vstack((val_targ, temp_targ))
                val_predict = np.vstack((val_predict, temp_predict))

        val_f1 = round(f1_score(val_targ, val_predict), 4)
        val_recall = round(recall_score(val_targ, val_predict), 4)
        val_precis = round(precision_score(val_targ, val_predict), 4)

        self.val_f1s.append(val_f1)
        self.val_recalls.append(val_recall)
        self.val_precisions.append(val_precis)

        # Add custom metrics to the logs, so that we can use them with
        # EarlyStopping and CSVLogger callbacks
        logs["val_f1"] = val_f1
        logs["val_recall"] = val_recall
        logs["val_precis"] = val_precis

        print("— val_f1: {} — val_precis: {} — val_recall {}".format(
            val_f1, val_precis, val_recall))
        return


valid_metrics = Metrics()

Then you can add valid_metrics to the callbacks argument:

your_model.fit_generator(..., callbacks=[valid_metrics])

Be sure to put it at the beginning of the callbacks list if you want other callbacks to use these measures.
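For example, a minimal sketch of that ordering, assuming you want early stopping and CSV logging driven by the custom val_f1 entry (the EarlyStopping/CSVLogger arguments and the "training_log.csv" filename here are illustrative, not part of the original answer):

from keras.callbacks import CSVLogger, EarlyStopping

# valid_metrics must come first so it has already written val_f1 into `logs`
# before the other callbacks read it at the end of each epoch.
early_stop = EarlyStopping(monitor="val_f1", mode="max", patience=3)
csv_logger = CSVLogger("training_log.csv")

your_model.fit_generator(generator=train_generator,
                         steps_per_epoch=steps_per_epoch,
                         epochs=epochs,
                         validation_data=validation_generator,
                         validation_steps=validation_steps,
                         callbacks=[valid_metrics, early_stop, csv_logger])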

answered Oct 07 '22 by Verdant89