I want to take the pretrained VGG16 model in Keras, remove its output layer, put on a new output layer with the number of classes suited to my problem, and then fit it on new data. For this reason I am trying to use the model here: https://keras.io/applications/#vgg16, but since it is not Sequential, I cannot just call model.pop(). Popping from model.layers and adding a new layer also does not work, because the predictions layer still expects the old shape. How would I do that? Is there a way to convert this type of model to Sequential?
You can pop() layers from model.layers and then use model.layers[-1].output to attach new layers. Note that popping only removes a layer from the layers list; the original graph is unchanged, which is why you then build a fresh Model on top of model.layers[-1].output instead of reusing the old model.
Example:
from keras.models import Model
from keras.layers import Dense, Flatten
from keras.applications import vgg16

# Load VGG16 with its original classifier head
model = vgg16.VGG16(weights='imagenet', include_top=True)
model.summary(line_length=150)

# Drop the 'predictions' and 'fc2' layers from the layers list
model.layers.pop()
model.layers.pop()
model.summary(line_length=150)

# Attach a new 10-class softmax head to the last remaining layer ('fc1')
new_layer = Dense(10, activation='softmax', name='my_dense')
inp = model.input
out = new_layer(model.layers[-1].output)

model2 = Model(inp, out)
model2.summary(line_length=150)
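Since the question also asks about fitting on new data, here is a minimal training sketch; x_train and y_train are hypothetical placeholders for your own preprocessed images and one-hot encoded labels:
# x_train: (n, 224, 224, 3) images, y_train: (n, 10) one-hot labels (placeholders)
# Optionally freeze the pretrained layers so only the new head is trained
for layer in model2.layers[:-1]:
    layer.trainable = False
model2.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
model2.fit(x_train, y_train, batch_size=32, epochs=5)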
Alternatively, you can use the include_top=False option of these models. In this case, if you want to add a Flatten layer, you also need to pass input_shape so that the flattened size is known.
# Load VGG16 without its fully connected head
model3 = vgg16.VGG16(weights='imagenet', include_top=False, input_shape=(224, 224, 3))
model3.summary(line_length=150)

# Flatten the convolutional features and add a new 10-class softmax head
flatten = Flatten()
new_layer2 = Dense(10, activation='softmax', name='my_dense_2')
inp2 = model3.input
out2 = new_layer2(flatten(model3.output))

model4 = Model(inp2, out2)
model4.summary(line_length=150)
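Whichever variant you use, remember that VGG16 expects its specific input preprocessing before you call predict or fit. A short sketch, where img_batch is a hypothetical batch of RGB images:
import numpy as np
from keras.applications.vgg16 import preprocess_input

# img_batch is a hypothetical placeholder: RGB images of shape (n, 224, 224, 3)
img_batch = np.random.rand(1, 224, 224, 3) * 255.0
x = preprocess_input(img_batch)  # converts RGB -> BGR and subtracts the ImageNet channel means
scores = model4.predict(x)       # shape (1, 10): softmax scores from the new head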
We can also transform the VGG model into a Sequential model, as follows:
import keras
from keras.models import Sequential
from keras.layers import Dense

# Create the VGG model
vgg_model = keras.applications.vgg16.VGG16(weights='imagenet')

# The created model is of type Model
type(vgg_model)
>> keras.engine.training.Model

# Convert it to Sequential by re-adding its layers one by one
model = Sequential()
for layer in vgg_model.layers:
    model.add(layer)
# Now check the model type: it's Sequential!
type(model)
>> keras.models.Sequential
# Verify the model details
model.summary()
>>
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_15 (InputLayer) (None, 224, 224, 3) 0
_________________________________________________________________
block1_conv1 (Conv2D) (None, 224, 224, 64) 1792
_________________________________________________________________
block1_conv2 (Conv2D) (None, 224, 224, 64) 36928
_________________________________________________________________
block1_pool (MaxPooling2D) (None, 112, 112, 64) 0
_________________________________________________________________
block2_conv1 (Conv2D) (None, 112, 112, 128) 73856
_________________________________________________________________
block2_conv2 (Conv2D) (None, 112, 112, 128) 147584
_________________________________________________________________
block2_pool (MaxPooling2D) (None, 56, 56, 128) 0
_________________________________________________________________
block3_conv1 (Conv2D) (None, 56, 56, 256) 295168
_________________________________________________________________
block3_conv2 (Conv2D) (None, 56, 56, 256) 590080
_________________________________________________________________
block3_conv3 (Conv2D) (None, 56, 56, 256) 590080
_________________________________________________________________
block3_pool (MaxPooling2D) (None, 28, 28, 256) 0
_________________________________________________________________
block4_conv1 (Conv2D) (None, 28, 28, 512) 1180160
_________________________________________________________________
block4_conv2 (Conv2D) (None, 28, 28, 512) 2359808
_________________________________________________________________
block4_conv3 (Conv2D) (None, 28, 28, 512) 2359808
_________________________________________________________________
block4_pool (MaxPooling2D) (None, 14, 14, 512) 0
_________________________________________________________________
block5_conv1 (Conv2D) (None, 14, 14, 512) 2359808
_________________________________________________________________
block5_conv2 (Conv2D) (None, 14, 14, 512) 2359808
_________________________________________________________________
block5_conv3 (Conv2D) (None, 14, 14, 512) 2359808
_________________________________________________________________
block5_pool (MaxPooling2D) (None, 7, 7, 512) 0
_________________________________________________________________
flatten (Flatten) (None, 25088) 0
_________________________________________________________________
fc1 (Dense) (None, 4096) 102764544
_________________________________________________________________
fc2 (Dense) (None, 4096) 16781312
_________________________________________________________________
predictions (Dense) (None, 1000) 4097000
=================================================================
Total params: 138,357,544
Trainable params: 138,357,544
Non-trainable params: 0
_________________________________________________________________
# Now that it's Sequential, we can perform the usual operations.
model.layers.pop()

# Freeze the pretrained layers so only the new head is trained
for layer in model.layers:
    layer.trainable = False

# Add a new 2-class 'softmax' layer in place of the earlier 'predictions' layer.
model.add(Dense(2, activation='softmax'))

# Check the summary; the new layer has been added.
model.summary()
Layer (type) Output Shape Param #
=================================================================
input_15 (InputLayer) (None, 224, 224, 3) 0
_________________________________________________________________
block1_conv1 (Conv2D) (None, 224, 224, 64) 1792
_________________________________________________________________
block1_conv2 (Conv2D) (None, 224, 224, 64) 36928
_________________________________________________________________
block1_pool (MaxPooling2D) (None, 112, 112, 64) 0
_________________________________________________________________
block2_conv1 (Conv2D) (None, 112, 112, 128) 73856
_________________________________________________________________
block2_conv2 (Conv2D) (None, 112, 112, 128) 147584
_________________________________________________________________
block2_pool (MaxPooling2D) (None, 56, 56, 128) 0
_________________________________________________________________
block3_conv1 (Conv2D) (None, 56, 56, 256) 295168
_________________________________________________________________
block3_conv2 (Conv2D) (None, 56, 56, 256) 590080
_________________________________________________________________
block3_conv3 (Conv2D) (None, 56, 56, 256) 590080
_________________________________________________________________
block3_pool (MaxPooling2D) (None, 28, 28, 256) 0
_________________________________________________________________
block4_conv1 (Conv2D) (None, 28, 28, 512) 1180160
_________________________________________________________________
block4_conv2 (Conv2D) (None, 28, 28, 512) 2359808
_________________________________________________________________
block4_conv3 (Conv2D) (None, 28, 28, 512) 2359808
_________________________________________________________________
block4_pool (MaxPooling2D) (None, 14, 14, 512) 0
_________________________________________________________________
block5_conv1 (Conv2D) (None, 14, 14, 512) 2359808
_________________________________________________________________
block5_conv2 (Conv2D) (None, 14, 14, 512) 2359808
_________________________________________________________________
block5_conv3 (Conv2D) (None, 14, 14, 512) 2359808
_________________________________________________________________
block5_pool (MaxPooling2D) (None, 7, 7, 512) 0
_________________________________________________________________
flatten (Flatten) (None, 25088) 0
_________________________________________________________________
fc1 (Dense) (None, 4096) 102764544
_________________________________________________________________
fc2 (Dense) (None, 4096) 16781312
_________________________________________________________________
dense_4 (Dense) (None, 2) 2002
=================================================================
Total params: 134,262,546
Trainable params: 2,002
Non-trainable params: 134,260,544
_________________________________________________________________
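Finally, the frozen model can be compiled and fit on the new data in the usual way. A minimal sketch, where x_train and y_train are hypothetical placeholders for your preprocessed images and one-hot labels for the two classes:
# x_train: (n, 224, 224, 3) images, y_train: (n, 2) one-hot labels (placeholders)
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
model.fit(x_train, y_train, batch_size=32, epochs=5, validation_split=0.1)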