How to give variable size images as input in keras

I am writing code for two-class image classification using Keras with the TensorFlow backend. My images are stored in folders on my computer and I want to feed them to my Keras model. load_img loads only a single image, so I have to use either flow(x, y) or flow_from_directory(directory); with flow(x, y) I would also have to supply the labels, which is a lengthy task, so I am using flow_from_directory(directory). My images have variable sizes, such as 20*40, 55*43, and so on, but here it is mentioned that a fixed target_size is required. In this solution it is said that variable-size images can be given as input to a convolution layer using input_shape=(1, None, None) or input_shape=(None, None, 3) (channels last, colour images), but fchollet mentions that this does not work with a Flatten layer, and my model contains both convolution and Flatten layers. In that post moi90 suggests trying different batches, where every batch contains images of the same size; however, it is not possible for me to group images by size because my data is very scattered. So I decided to go with batch_size=1 and wrote the following code:

from __future__ import print_function
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten, Conv2D, MaxPooling2D
from keras import backend as K
import numpy as np
from keras.preprocessing.image import ImageDataGenerator

input_shape = (None,None,3)

model = Sequential()
model.add(Conv2D(8, kernel_size=(3, 3),
                 activation='relu',
                 input_shape=input_shape))
model.get_weights()
model.add(Conv2D(16, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(32, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(2, activation='softmax'))

model.compile(loss='binary_crossentropy',optimizer='rmsprop',metrics=['accuracy'])

train_datagen = ImageDataGenerator()
test_datagen = ImageDataGenerator()
train_generator = train_datagen.flow_from_directory('/data/train', target_size=input_shape, batch_size=1,class_mode='binary') 
validation_generator = test_datagen.flow_from_directory('/data/test',target_size=input_shape,batch_size=1,class_mode='binary')
model.fit_generator(train_generator,steps_per_epoch=1,epochs=2,validation_data=validation_generator,validation_steps=1)

Now I am getting the following error:

Traceback (most recent call last):

  File "<ipython-input-8-4e22d22e4bd7>", line 23, in <module>
    model.add(Flatten())

  File "/home/nd/anaconda3/lib/python3.6/site-packages/keras/models.py", line 489, in add
    output_tensor = layer(self.outputs[0])

  File "/home/nd/anaconda3/lib/python3.6/site-packages/keras/engine/topology.py", line 622, in __call__
    output_shape = self.compute_output_shape(input_shape)

  File "/home/nd/anaconda3/lib/python3.6/site-packages/keras/layers/core.py", line 478, in compute_output_shape
    '(got ' + str(input_shape[1:]) + '. '

ValueError: The shape of the input to "Flatten" is not fully defined (got (None, None, 16). Make sure to pass a complete "input_shape" or "batch_input_shape" argument to the first layer in your model.

I am sure it is not because of a mismatch between img_dim_ordering and the backend; I have checked both. Please help me correct the code, or explain how I can give variable-size images as input to my model.

asked Dec 13 '17 by Hitesh

3 Answers

You can train on variable sizes, as long as you don't try to put images of different sizes into the same numpy array.

But some layers do not support variable sizes, and Flatten is one of them. It's impossible to train models containing Flatten layers with variable sizes.

You can try, though, to replace the Flatten layer with either a GlobalMaxPooling2D or a GlobalAveragePooling2D layer. But these layers may condense too much information into too little data, so it might be necessary to add more convolutions with more channels before them.
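
For concreteness, here is a minimal sketch (my own, not part of the original answer) of the question's model with the Flatten layer replaced by GlobalMaxPooling2D; the layer sizes are simply copied from the question and may need tuning:

from keras.models import Sequential
from keras.layers import Dense, Dropout, Conv2D, MaxPooling2D, GlobalMaxPooling2D

model = Sequential()
model.add(Conv2D(8, (3, 3), activation='relu', input_shape=(None, None, 3)))
model.add(Conv2D(16, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(GlobalMaxPooling2D())  # replaces Flatten; output shape is (batch, 16) for any image size
model.add(Dense(32, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(2, activation='softmax'))
model.compile(loss='binary_crossentropy', optimizer='rmsprop', metrics=['accuracy'])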

You must make sure that your generator will produce batches containing images of the same size, though. The generator will fail when trying to put two or more images with different sizes in the same numpy array.
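
Since flow_from_directory always resizes images to a fixed target_size, feeding unresized images means writing a small generator of your own. Below is a minimal sketch (an assumption on my part, not part of the answer) that yields one image per batch, assuming a directory layout like /data/train/<class_name>/<image files>; the function name and class names are hypothetical:

import os
import numpy as np
from keras.preprocessing.image import load_img, img_to_array

def single_image_generator(directory, class_names):
    """Yield (x, y) batches of size 1 so differently sized images never share an array."""
    while True:
        for label, class_name in enumerate(class_names):
            class_dir = os.path.join(directory, class_name)
            for fname in os.listdir(class_dir):
                img = load_img(os.path.join(class_dir, fname))   # no resizing
                x = img_to_array(img)[np.newaxis, ...]           # shape (1, height, width, 3)
                y = np.zeros((1, len(class_names)))              # one-hot, matches a softmax output
                y[0, label] = 1.0
                yield x, y

# Example usage with a model that ends in a global pooling layer (class names are hypothetical):
# train_gen = single_image_generator('/data/train', ['class_a', 'class_b'])
# model.fit_generator(train_gen, steps_per_epoch=200, epochs=2)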

answered Oct 10 '22 by Daniel Möller


See the answer in https://github.com/keras-team/keras/issues/1920. You should change the input to be:

input = Input(shape=(None, None,3))

Then, at the end, add GlobalAveragePooling2D():

Try something like this:

from keras.models import Sequential
from keras.layers import (Dense, Dropout, Conv2D, MaxPooling2D,
                          GlobalAveragePooling2D)

model = Sequential()
model.add(Conv2D(8, kernel_size=(3, 3),
                 activation='relu',
                 input_shape=(None, None, 3)))  # Look at the shape: height and width stay undefined
model.add(Conv2D(16, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
# IMPORTANT !
model.add(GlobalAveragePooling2D())  # collapses each feature map to one value: output shape (batch, 16)
# IMPORTANT !
# Flatten() is no longer needed: GlobalAveragePooling2D already yields a flat vector
model.add(Dense(32, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(2, activation='softmax'))

model.compile(loss='binary_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
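
As a quick sanity check (my own sketch, not part of the answer), the compiled model accepts inputs of different spatial sizes because only the channel count is fixed:

import numpy as np

# Both calls work even though the two dummy images have different heights and widths.
print(model.predict(np.random.rand(1, 20, 40, 3)).shape)  # -> (1, 2)
print(model.predict(np.random.rand(1, 55, 43, 3)).shape)  # -> (1, 2)
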
answered Oct 10 '22 by Dror Hilman


Unfortunately you can't train a neural network with variable-size images as it is. You have to resize all images to a given size. Fortunately, you don't have to do this permanently on your hard drive; Keras does it for you on the fly.

Inside your flow_from_directory you should define a target_size like this:

train_generator = train_datagen.flow_from_directory(
    'data/train',
    target_size=(150, 150),  # every image will be resized to (150, 150) before being fed to the network
    batch_size=32,
    class_mode='binary')

Also, if you do so, you can have whatever batch size you want.
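
For completeness, a minimal training sketch (my own, assuming the model's first layer was built with input_shape=(150, 150, 3) to match target_size):

model.fit_generator(
    train_generator,
    steps_per_epoch=train_generator.samples // 32,  # .samples = number of images found; 32 = batch_size above
    epochs=2)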

answered Oct 10 '22 by Ioannis Nasios