
Keras input_shape for conv2d and manually loaded images

I am manually creating my dataset from a number of 384x286 b/w images.

I load an image like this:

from PIL import Image
import numpy as np

x = []
for f in files:
    img = Image.open(f)
    img.load()
    data = np.asarray(img, dtype="int32")
    x.append(data)
x = np.array(x)

This results in x being an array of shape (num_samples, 286, 384):

print(x.shape) => (100, 286, 384)

Reading the Keras documentation and checking my backend, I should provide to the convolution step an input_shape composed of (rows, cols, channels).

Since I don't know the number of samples in advance, I would have expected to pass an input shape similar to

( None, 286, 384, 1 )

the model is built as follows:

model = Sequential()
model.add(Conv2D(32, kernel_size=(3, 3), activation='relu', input_shape=input_shape))
# other steps...

Passing input_shape=(286, 384, 1) causes:

Error when checking input: expected conv2d_1_input to have 4 dimensions, but got array with shape (85, 286, 384)

Passing input_shape=(None, 286, 384, 1) causes:

Input 0 is incompatible with layer conv2d_1: expected ndim=4, found ndim=5

What am I doing wrong? How do I have to reshape the input array?

Stormsson asked May 10 '17 14:05

3 Answers

Set the input_shape to (286,384,1). Now the model expects an input with 4 dimensions. This means that you have to reshape your image with .reshape(n_images, 286, 384, 1). Now you have added an extra dimension without changing the data and your model is ready to run. Basically, you need to reshape your data to (n_images, x_shape, y_shape, channels).
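The reshape described above can be sketched in plain NumPy (the array here is random data standing in for the question's image stack):

```python
import numpy as np

# Stand-in for the 100 grayscale 286x384 images loaded in the question
x = np.zeros((100, 286, 384))

# Append a channel axis; the pixel data itself is unchanged
x = x.reshape(x.shape[0], 286, 384, 1)

print(x.shape)  # (100, 286, 384, 1)
```

The new trailing axis has length 1 because grayscale images have a single channel.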

The cool thing is that you can also use RGB images as input; just change channels to 3.

Check also this answer: Keras input explanation: input_shape, units, batch_size, dim, etc

Example

import numpy as np
from keras.models import Sequential
from keras.layers import Conv2D, Flatten, Dense, Activation
from keras.utils import to_categorical

# Create model
model = Sequential()
model.add(Conv2D(32, kernel_size=(3, 3), activation='relu', input_shape=(286, 384, 1)))
model.add(Flatten())
model.add(Dense(2))
model.add(Activation('softmax'))

model.compile(loss='binary_crossentropy',
              optimizer='adam',
              metrics=['accuracy'])

# Create random data
n_images = 100
data = np.random.randint(0, 2, n_images * 286 * 384)
labels = np.random.randint(0, 2, n_images)
labels = to_categorical(labels)

# Add the channel dimension to the images
data = data.reshape(n_images, 286, 384, 1)

# Fit model
model.fit(data, labels, verbose=1)

Wilmar van Ommeren answered Oct 14 '22 21:10


Your input_shape is correct, i.e. input_shape=(286, 384, 1).

Reshape your input image to 4-D, [batch_size, img_height, img_width, number_of_channels]:

input_image = input_image.reshape(85, 286, 384, 1)

then, during training:

model.fit(input_image, label)
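An equivalent way to add the channel axis without spelling out every dimension is np.expand_dims (the array below is a stand-in with the shape from the error message above):

```python
import numpy as np

# Stand-in for the 85 grayscale images from the error message
input_image = np.zeros((85, 286, 384))

# Add a trailing channel axis; axis=-1 means "append at the end"
input_image = np.expand_dims(input_image, axis=-1)

print(input_image.shape)  # (85, 286, 384, 1)
```

This avoids hard-coding the batch size, so the same line works for any number of images.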
like image 37
thefifthjack005 Avatar answered Oct 14 '22 20:10

thefifthjack005


I think the following might resolve your error:

  1. The input_shape we provide to the first Conv2D (the first layer of the sequential model) should be something like (286, 384, 1) or (width, height, channels). There is no need for a "None" dimension for batch_size in it.

  2. The shape of your input data should then be (batch_size, 286, 384, 1).

Does this help you?

Harsha Pokkalla answered Oct 14 '22 19:10