I am new to Keras for deep learning applications. I am trying to perform binary classification using pre-trained models. I am running the code in Google Colab, where the TensorFlow version is 2.2.0-rc2. The following is the model I am using.
vgg19_basemodel = tf.keras.applications.VGG19(include_top = False, weights='imagenet', input_shape=(IMSIZE,IMSIZE,3))
#vgg19_basemodel.summary()
x = vgg19_basemodel.output
x = tf.keras.layers.Conv2D(16, (3,3), activation='relu')(x)
x = tf.keras.layers.MaxPooling2D(2,2)(x)
x = tf.keras.layers.Flatten()(x)
x = tf.keras.layers.Dense(32, activation="relu")(x)
x = tf.keras.layers.Dropout(0.2)(x)
x = tf.keras.layers.Dense(1, activation="sigmoid")(x)
for layer in vgg19_basemodel.layers:
    layer.trainable = False
vgg19_model = tf.keras.Model(vgg19_basemodel.input, x)
vgg19_model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=LR), loss='binary_crossentropy', metrics=['accuracy'])
#vgg19_model.summary()
The following are the callbacks I am using.
class myCallBack(tf.keras.callbacks.Callback):
    def on_epoch_end(self, epoch, logs={}):
        if logs.get('loss') <= EXLOSS and logs.get('accuracy') >= EXACC and logs.get('val_accuracy') >= VALACC:
            print("\nCALLBACK: TRAINING LOSS {} reached.".format(EXLOSS))
            self.model.stop_training = True
ccall = myCallBack()
es = tf.keras.callbacks.EarlyStopping(monitor='loss', mode='min', min_delta=0.01, baseline = 0.01, patience=10, restore_best_weights=True)
I am training the model using the following:
d3_vgg19_history = vgg19_model.fit(
    d3_train_generator,
    epochs=EPOCHS,
    validation_data=d3_test_generator,
    steps_per_epoch=d3_stepsize_train,
    validation_steps=d3_stepsize_test,
    callbacks=[ccall, es]
)
The custom callback causes no problems and stops training as expected when used without early stopping.
However, if I set restore_best_weights=True in EarlyStopping, the following error is raised as soon as the epoch number reaches patience.
If I set restore_best_weights=False, no problems occur and training finishes successfully.
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-38-f6a9ab9579ae> in <module>()
6 steps_per_epoch=d3_stepsize_train,
7 validation_steps=d3_stepsize_test,
----> 8 callbacks=[ccall, esd3]
9 )
4 frames
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py in _method_wrapper(self, *args, **kwargs)
64 def _method_wrapper(self, *args, **kwargs):
65 if not self._in_multi_worker_mode(): # pylint: disable=protected-access
---> 66 return method(self, *args, **kwargs)
67
68 # Running inside `run_distribute_coordinator` already.
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_batch_size, validation_freq, max_queue_size, workers, use_multiprocessing, **kwargs)
811 epoch_logs.update(val_logs)
812
--> 813 callbacks.on_epoch_end(epoch, epoch_logs)
814 if self.stop_training:
815 break
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/callbacks.py in on_epoch_end(self, epoch, logs)
363 logs = self._process_logs(logs)
364 for callback in self.callbacks:
--> 365 callback.on_epoch_end(epoch, logs)
366
367 def on_train_batch_begin(self, batch, logs=None):
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/callbacks.py in on_epoch_end(self, epoch, logs)
1483 if self.verbose > 0:
1484 print('Restoring model weights from the end of the best epoch.')
-> 1485 self.model.set_weights(self.best_weights)
1486
1487 def on_train_end(self, logs=None):
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/base_layer.py in set_weights(self, weights)
1517 expected_num_weights += 1
1518
-> 1519 if expected_num_weights != len(weights):
1520 raise ValueError(
1521 'You called `set_weights(weights)` on layer "%s" '
TypeError: object of type 'NoneType' has no len()
I have tested the early stopping with other pre-trained models, namely VGG16, DenseNet201, ResNet, Xception, Inception, etc. The problem persists, and the same error pops up whenever restore_best_weights is set to True. Thanks in advance for helping me out with this case. Let me know if any other information is necessary.
Found the "problem". It makes sense that best_weights is None: EarlyStopping only stores weights when the monitored metric improves on the current best, and with a baseline set, the best is initialized to that baseline. Since no epoch ever beat the baseline in my case, best_weights was never saved, and set_weights(None) failed. I got rid of "baseline=1.0" and now it works for me.
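For reference, a minimal sketch of the EarlyStopping configuration that works in this setup (the monitor, min_delta, and patience values are the placeholders from the question, not a recommendation): without the baseline argument, the first epoch always counts as the best so far, so best_weights is populated before restore_best_weights ever needs it.

# No baseline: the first epoch is treated as the best so far, so
# best_weights is always set and set_weights() never receives None.
es = tf.keras.callbacks.EarlyStopping(
    monitor='loss',
    mode='min',
    min_delta=0.01,
    patience=10,
    restore_best_weights=True
)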