I'm trying to train a model from a research paper in which half of the filters of a convolution layer are set to Gabor filters and the rest keep their default random initialization. Normally, when I need to make a layer non-trainable, I set its trainable attribute to False. But here I have to freeze only half of the filters of a layer, and I have no idea how to do that. Any help would be really appreciated. I'm using Keras with the TensorFlow backend.
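For context, this is what I normally do when I want to freeze a whole layer (a minimal sketch with an arbitrary layer):

from keras.layers import Conv2D

conv = Conv2D(32, (3, 3), padding='same')
conv.trainable = False   # freezes all 32 filters, but I only want to freeze half of them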
Recursive setting of the trainable attribute: if you set trainable = False on a model or on any layer that has sublayers, all child layers become non-trainable as well.
There are no trainable parameters in a max-pooling layer.
Setting trainable to False moves all of the layer's weights from trainable to non-trainable. This is called "freezing" the layer: the state of a frozen layer won't be updated during training (whether you train with fit() or with any custom loop that relies on trainable_weights to apply gradient updates).
So the batch normalization layer is most likely the reason why your custom network has non-trainable parameters.
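A quick way to see both effects is to count a model's trainable and non-trainable weights before and after freezing; the toy model below is just an illustration:

from keras.models import Sequential
from keras.layers import Conv2D, BatchNormalization, MaxPooling2D

model = Sequential([
    Conv2D(16, (3, 3), input_shape=(32, 32, 3)),
    BatchNormalization(),   # gamma/beta are trainable, moving mean/variance are not
    MaxPooling2D(),         # no parameters at all
])

print(len(model.trainable_weights))      # 4: conv kernel + bias, BN gamma + beta
print(len(model.non_trainable_weights))  # 2: BN moving mean + variance

model.trainable = False                  # propagates recursively to all sublayers
print(len(model.trainable_weights))      # 0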
How about creating two convolutional layers that receive the same input and (nearly) the same parameters? One of them is trainable with randomly initialized weights, and the other is non-trainable and holds the Gabor filters.
You could then concatenate the outputs of the two layers so that, to the rest of the network, they look like the output of a single convolutional layer.
Here is an example for demonstration (you need to use the Keras functional API):
from keras.layers import Input, Conv2D, concatenate

n_filters = 32
my_input = Input(shape=...)

# two parallel convolutions on the same input, each holding half of the filters
conv_freezed = Conv2D(n_filters // 2, (3, 3), ...)
conv_trainable = Conv2D(n_filters // 2, (3, 3), ...)

conv_freezed_out = conv_freezed(my_input)
conv_trainable_out = conv_trainable(my_input)

# concatenate along the channel axis so the result looks like one n_filters-wide layer
conv_out = concatenate([conv_freezed_out, conv_trainable_out])

# set the Gabor weights and freeze the layer (do this before compiling the model)
conv_freezed.set_weights(...)
conv_freezed.trainable = False
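To fill in the placeholders, the Gabor kernels need to be packed into an array with the shape Keras expects for a Conv2D kernel, i.e. (kernel_height, kernel_width, input_channels, n_filters // 2), together with a bias vector. Below is a fully concrete sketch of the whole idea; the input shape, the use of OpenCV's getGaborKernel, and the Gabor parameter values are all assumptions made purely for illustration:

import numpy as np
import cv2                        # assumption: OpenCV is used to build the Gabor kernels
from keras.layers import Input, Conv2D, concatenate
from keras.models import Model

n_filters = 32
kh, kw, in_channels = 3, 3, 1     # assumed kernel size and grayscale input

my_input = Input(shape=(64, 64, in_channels))          # assumed input size

conv_freezed = Conv2D(n_filters // 2, (kh, kw), padding='same')
conv_trainable = Conv2D(n_filters // 2, (kh, kw), padding='same')

conv_freezed_out = conv_freezed(my_input)
conv_trainable_out = conv_trainable(my_input)
conv_out = concatenate([conv_freezed_out, conv_trainable_out])

# one Gabor kernel per frozen filter, orientations spread over [0, pi);
# the sigma/lambda/gamma values below are purely illustrative
thetas = np.linspace(0, np.pi, n_filters // 2, endpoint=False)
gabor_bank = np.stack(
    [cv2.getGaborKernel((kh, kw), 1.0, theta, 2.0, 0.5) for theta in thetas],
    axis=-1,
)                                                      # shape (kh, kw, n_filters // 2)
gabor_bank = np.repeat(gabor_bank[:, :, np.newaxis, :], in_channels, axis=2)

# Conv2D stores [kernel, bias]; the layer is already built because it was called above
conv_freezed.set_weights([gabor_bank, np.zeros(n_filters // 2)])
conv_freezed.trainable = False                         # freeze before compiling

model = Model(inputs=my_input, outputs=conv_out)       # further layers would follow in practice
model.compile(optimizer='adam', loss='mse')
model.summary()

Note that trainable has to be set to False before the model is compiled; if you change it afterwards, you need to compile the model again for the change to take effect.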