Why do we need to set include_top=False if we want to change the input_shape?

As far as I know, the input enters through the convolutional blocks. So if we want to change the input shape, modifying the convolutions would make sense. Why do we need to set include_top=False and remove the fully connected layers at the end?

On the other hand, if we have a different number of classes, Keras has an option to change the softmax layer using the classes argument.

I know I am the one missing something here. Please help me.

Example: For Inception Resnet V2

input_shape: optional shape tuple, only to be specified if include_top is False (otherwise the input shape has to be (299, 299, 3) with 'channels_last' data format, or (3, 299, 299) with 'channels_first' data format). It should have exactly 3 input channels, and width and height should be no smaller than 139. E.g. (150, 150, 3) would be one valid value.

include_top: whether to include the fully-connected layer at the top of the network.

https://keras.io/applications/#inceptionresnetv2
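
For reference, here is a minimal sketch of the usages the documentation describes (using the keras.applications API from the link above; the 10-class value is just an illustration):

```python
from keras.applications.inception_resnet_v2 import InceptionResNetV2

# include_top=True: the input shape is fixed at (299, 299, 3) and the
# 1000-class ImageNet fully connected classifier is attached.
full = InceptionResNetV2(include_top=True, weights='imagenet')

# include_top=False: a custom input_shape is allowed, as long as it
# has 3 channels and width/height of at least 139.
base = InceptionResNetV2(include_top=False, weights='imagenet',
                         input_shape=(150, 150, 3))

# The classes argument replaces the 1000-way softmax, but per the docs
# only with include_top=True and weights=None (randomly initialized top).
scratch = InceptionResNetV2(include_top=True, weights=None, classes=10)
```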

asked Dec 22 '25 by user239457

1 Answer

This is simply because the fully connected layers at the end can only take fixed-size inputs, and that size is determined by the input shape together with all the processing in the convolutional layers. Any change to the input shape changes the shape of the input to the fully connected layers, making the pretrained weights incompatible (the matrix sizes no longer match and cannot be applied).
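
To make this concrete, a minimal sketch comparing the convolutional output shapes for two input sizes (weights=None just avoids downloading anything):

```python
from keras.applications.inception_resnet_v2 import InceptionResNetV2

# The convolutional base alone runs fine at either size...
base_299 = InceptionResNetV2(include_top=False, weights=None,
                             input_shape=(299, 299, 3))
base_150 = InceptionResNetV2(include_top=False, weights=None,
                             input_shape=(150, 150, 3))

# ...but its output shape depends on the input shape, so a fully
# connected layer trained on one size cannot accept the other.
print(base_299.output_shape)  # (None, 8, 8, 1536)
print(base_150.output_shape)  # (None, 3, 3, 1536)
```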

This problem is specific to fully connected layers. If you use another layer for classification, such as global average pooling, then you do not have this problem.
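
For example, a sketch of a resolution-independent classification head (the 10-class output here is hypothetical):

```python
from keras.applications.inception_resnet_2 import InceptionResNetV2
from keras.layers import GlobalAveragePooling2D, Dense
from keras.models import Model

base = InceptionResNetV2(include_top=False, weights='imagenet',
                         input_shape=(150, 150, 3))

# Global average pooling collapses whatever spatial size the base
# produces into a fixed-length 1536-vector, so the classifier below
# no longer depends on the input resolution.
x = GlobalAveragePooling2D()(base.output)
outputs = Dense(10, activation='softmax')(x)
model = Model(inputs=base.input, outputs=outputs)
```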

answered Dec 24 '25 by Dr. Snoopy


