 

Use a pretrained model with a different input shape and different classes

I am working on a classification problem using a CNN where my input image size is 64x64, and I want to use a pretrained model such as VGG16, a COCO-pretrained model, or any other. But the problem is that the input image size of the pretrained model is 224x224. How do I sort out this issue? Is there any data augmentation approach for handling the input image size?

If I resize my input images to 224x224, there is a very high chance that they will get blurred, and that may impact training. Please correct me if I am wrong.

Another question is related to the pretrained model. If I am using transfer learning, generally how many layers do I have to freeze from the pretrained model, considering that my classes are very different from the pretrained model's classes? I guess the first few layers can be frozen, since they capture the edges, curves, etc. of the images, which are common to all images.

asked Sep 01 '18 by Pankaj Kumar


1 Answer

But the problem is that the input image size of the pretrained model is 224x224.

I assume you are working with Keras/TensorFlow (it is similar for other DL frameworks). According to the Keras Applications documentation:

input_shape: optional shape tuple, only to be specified if include_top is False (otherwise the input shape has to be (224, 224, 3) (with 'channels_last' data format) or (3, 224, 224) (with 'channels_first' data format)). It should have exactly 3 input channels, and width and height should be no smaller than 48. E.g. (200, 200, 3) would be one valid value.

So there are two options to solve your issue:

  1. Resize your input images to 224x224 with an existing library and use the full VGG classifier [include_top=True] (a short sketch of this is shown after the code below).

  2. Train your own classifier on top of the VGG convolutional base. As mentioned in the Keras documentation above, if your image size is different from 224x224 you have to train your own classifier [include_top=False]. You can do this easily with something like:

     import keras
     from keras.applications.vgg19 import VGG19

     inp = keras.layers.Input(shape=(64, 64, 3), name='image_input')

     # Convolutional base only (include_top=False), kept frozen
     vgg_model = VGG19(weights='imagenet', include_top=False, input_shape=(64, 64, 3))
     vgg_model.trainable = False

     # Run the input through the frozen base, then train a new classifier on top
     x = vgg_model(inp)
     x = keras.layers.Flatten(name='flatten')(x)
     x = keras.layers.Dense(512, activation='relu', name='fc1')(x)
     x = keras.layers.Dense(512, activation='relu', name='fc2')(x)
     x = keras.layers.Dense(10, activation='softmax', name='predictions')(x)
     new_model = keras.models.Model(inputs=inp, outputs=x)
     new_model.compile(optimizer='adam', loss='categorical_crossentropy',
                       metrics=['accuracy'])
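
For option 1, a minimal sketch could look like this (assuming tf.keras; the images array below is just a random placeholder standing in for your own 64x64 batch):

    import numpy as np
    import tensorflow as tf
    from tensorflow.keras.applications.vgg16 import VGG16, preprocess_input, decode_predictions

    # Placeholder batch of 64x64 RGB images with values in [0, 255]
    images = np.random.randint(0, 256, size=(4, 64, 64, 3)).astype('float32')

    # Upscale to the 224x224 input that the full VGG16 classifier expects
    resized = tf.image.resize(images, (224, 224)).numpy()

    # Full ImageNet classifier (include_top=True), used as-is
    model = VGG16(weights='imagenet', include_top=True)
    preds = model.predict(preprocess_input(resized))
    print(decode_predictions(preds, top=3))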
    

If I am using transfer learning, generally how many layers do I have to freeze from the pretrained model?

It really depends on your new task, how many training examples you have, what your pretrained model is, and lots of other things. If I were you, I would first throw away the pretrained model's classifier and train only a new one. Then, if that did not work, I would also unfreeze or remove some of the top convolution layers and proceed step by step until I got good performance.
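
As a rough illustration of that progression (a sketch only, assuming Keras with the stock VGG19 layer names; the cut points 'block5_conv1' and 'block4_pool' are just examples, not part of the original answer):

    from keras.applications.vgg19 import VGG19
    from keras.models import Model

    vgg_model = VGG19(weights='imagenet', include_top=False, input_shape=(64, 64, 3))

    # Step 1: freeze the whole convolutional base and train only your new classifier
    for layer in vgg_model.layers:
        layer.trainable = False

    # Step 2 (if needed): unfreeze the last convolution block and fine-tune it
    set_trainable = False
    for layer in vgg_model.layers:
        if layer.name == 'block5_conv1':   # everything from this layer on becomes trainable
            set_trainable = True
        layer.trainable = set_trainable

    # Alternative: drop the deepest block entirely and use an earlier feature map
    truncated_base = Model(inputs=vgg_model.input,
                           outputs=vgg_model.get_layer('block4_pool').output)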

answered Oct 21 '22 by Amir