 

Set half of the filters of a layer as not trainable keras/tensorflow

I'm trying to train a model suggested by this research paper where I set half of the filters of a convolution layer to Gabor filters and the rest to random weights, which are initialized by default. Normally, if I have to set a layer as not trainable, I set its trainable attribute to False. But here I have to freeze only half of the filters of a layer, and I have no idea how to do so. Any help would be really appreciated. I'm using Keras with the TensorFlow backend.

Kalyan M asked Sep 20 '18


People also ask

How do you make a layer non-trainable?

Recursive setting of the trainable attribute: if you set trainable = False on a model or on any layer that has sublayers, all child layers become non-trainable as well.

In which layer trainable weights are not present?

There are no trainable parameters in a max-pooling layer.
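A quick way to confirm this is to count parameters layer by layer. A minimal sketch (the 28×28×1 input shape and filter count are arbitrary choices for illustration):

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(28, 28, 1)),
    tf.keras.layers.Conv2D(8, (3, 3)),     # 8 * (3*3*1) + 8 biases = 80 params
    tf.keras.layers.MaxPooling2D((2, 2)),  # pooling has no weights: 0 params
])

print(model.count_params())            # 80 (all from the Conv2D layer)
print(model.layers[-1].count_params()) # 0  (max pooling)
```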

Why is trainable false?

Setting trainable to False moves all of a layer's weights from trainable to non-trainable. This is called "freezing" the layer: the state of a frozen layer won't be updated during training (either when training with fit() or when training with any custom loop that relies on trainable_weights to apply gradient updates).
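The move from trainable_weights to non_trainable_weights is visible directly on the layer. A small sketch (the Dense layer and its shape are arbitrary examples):

```python
import tensorflow as tf

layer = tf.keras.layers.Dense(4)
layer.build((None, 2))  # create the kernel and bias weights

layer.trainable = False  # freeze the layer

print(len(layer.trainable_weights))      # 0
print(len(layer.non_trainable_weights))  # 2 (kernel and bias)
```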

Which of the following layers is non-trainable?

The batch normalization layer is most likely the reason your custom network has non-trainable parameters.


1 Answer

How about making two convolutional layers that receive the same input and (nearly) the same parameters? One of them is trainable with random weights at initialization, and the other is non-trainable and holds the Gabor filters.

You can then concatenate the outputs of the two layers so that the result looks like the output of a single convolutional layer.

Here is an example for demonstration (you need to use the Keras functional API):

n_filters = 32

my_input = Input(shape=...)
# use integer division so the filter count stays an int
conv_freezed = Conv2D(n_filters // 2, (3, 3), ...)
conv_trainable = Conv2D(n_filters // 2, (3, 3), ...)

conv_freezed_out = conv_freezed(my_input)
conv_trainable_out = conv_trainable(my_input)
conv_out = concatenate([conv_freezed_out, conv_trainable_out])

# set the Gabor weights and freeze the layer
conv_freezed.set_weights(...)
conv_freezed.trainable = False
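For completeness, here is a self-contained sketch of the same idea. The input shape (64×64×1) is an arbitrary assumption, and the "Gabor" kernels below are random placeholders of the right shape; in practice you would compute real Gabor kernels (e.g. with cv2.getGaborKernel) and pass those to set_weights:

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras.layers import Input, Conv2D, concatenate
from tensorflow.keras.models import Model

n_filters = 32

inp = Input(shape=(64, 64, 1))
conv_frozen = Conv2D(n_filters // 2, (3, 3), padding="same")
conv_trainable = Conv2D(n_filters // 2, (3, 3), padding="same")

# both branches see the same input; their outputs are stacked along channels
out = concatenate([conv_frozen(inp), conv_trainable(inp)])
model = Model(inp, out)

# overwrite the frozen branch's kernels (shape: 3 x 3 x in_channels x filters)
# -- placeholder values standing in for real Gabor kernels
gabor_kernels = np.random.randn(3, 3, 1, n_filters // 2).astype("float32")
biases = np.zeros(n_filters // 2, dtype="float32")
conv_frozen.set_weights([gabor_kernels, biases])
conv_frozen.trainable = False

print(model.output_shape)  # (None, 64, 64, 32)
```

After freezing, only the trainable branch's weights are updated by fit(), while the concatenated output still behaves like a single 32-filter convolution.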
dennis-w answered Sep 29 '22