I am trying to learn some Keras syntax and am playing with the Inception v3 example.
I have a 4-class multiclass classification toy problem, so I changed the following lines from the example:
NB_CLASS = 4 # number of classes
DIM_ORDERING = 'tf' # 'th' (channels, width, height) or 'tf' (width, height, channels)
My toy datasets have the following dimensions:
I then try to train the model with the following code:
# fit the model on the batches generated by datagen.flow()
# https://github.com/fchollet/keras/issues/1627
# http://keras.io/models/sequential/#sequential-model-methods
checkpointer = ModelCheckpoint(filepath="/tmp/weights.hdf5", verbose=1, save_best_only=True)
model.fit_generator(datagen.flow(X_train, Y_train, batch_size=32),
                    nb_epoch=10,
                    samples_per_epoch=32,
                    class_weight=None,  # classWeights
                    verbose=2,
                    validation_data=(X_test, Y_test),
                    callbacks=[checkpointer])
Then I get the following error:
Exception: The model expects 2 input arrays, but only received one array. Found: array with shape (179, 4)
This probably relates to the fact that Inception expects labels for the auxiliary classifiers as well (Szegedy et al., 2014):
model = Model(input=img_input, output=[preds, aux_preds])
How do I give the two sets of labels to the model in Keras? I am not an advanced Python programmer either.
Inception V3 is a type of convolutional neural network. It consists of many convolution and max-pooling layers, followed by fully connected layers. However, you do not have to know its structure by heart; Keras handles it for you.
Regarding the pooling argument: None (default) means that the output of the model will be the 4D tensor output of the last convolutional block; 'avg' means that global average pooling will be applied to the output of the last convolutional block, so the output of the model will be a 2D tensor.
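The difference between the two pooling options can be sketched with NumPy (the shapes below are illustrative, roughly what InceptionV3's last convolutional block produces):

```python
import numpy as np

# Hypothetical feature map: batch of 2 images, 8x8 spatial grid, 2048 channels.
features = np.random.rand(2, 8, 8, 2048)

# pooling=None: the 4D tensor is returned as-is.
print(features.shape)  # (2, 8, 8, 2048)

# pooling='avg': global average pooling over the spatial axes -> 2D tensor.
pooled = features.mean(axis=(1, 2))
print(pooled.shape)    # (2, 2048)
```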
The default input image size of Inception-v3 is 299×299; however, the image size in the dataset was 224×224.
I recommend you first try this tutorial; the code can be found here. In its first part, it shows how to load data from a directory using:
.flow_from_directory(
    train_data_dir,
    target_size=(img_width, img_height),
    batch_size=batch_size,
    class_mode='binary')
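flow_from_directory infers the class of each image from its sub-folder, so the on-disk layout for a 4-class problem looks like this (a minimal sketch; the class names are made up):

```python
import os
import tempfile

# One sub-folder per class inside the training directory.
train_data_dir = tempfile.mkdtemp()
for class_name in ("birds", "cats", "dogs", "fish"):
    os.makedirs(os.path.join(train_data_dir, class_name))

print(sorted(os.listdir(train_data_dir)))  # ['birds', 'cats', 'dogs', 'fish']
```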
In order to input different classes, you will have to put your images in one folder per class (note that there is probably another way of doing it, by passing the labels explicitly). Also note that in your case you can't use class_mode='binary'; you should use 'categorical':
`"binary"`: binary targets (if there are only two classes),
`"categorical"`: categorical targets,
Then you can use the inceptionv3 model that's already in Keras:
from keras.applications import InceptionV3
cnn = InceptionV3(...)
Also note that you have too few examples to train InceptionV3 from scratch, as this model is very big (check the size here). What you could do in this case is transfer learning, using pre-trained weights with InceptionV3. See the section Using the bottleneck features of a pre-trained network: 90% accuracy in a minute in the tutorial.
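The bottleneck idea behind that tutorial section can be sketched without Keras (everything below is a stand-in, not the real API): the frozen pre-trained network is treated as a fixed function, its outputs are computed once and cached, and only a small classifier on top is trained.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for InceptionV3 with frozen weights: any fixed mapping from
# images to feature vectors. Here: a constant random projection.
def frozen_feature_extractor(images):
    flat = images.reshape(len(images), -1)
    return flat @ proj

proj = rng.normal(size=(12, 5))          # fixed "pre-trained" weights
images = rng.normal(size=(8, 2, 2, 3))   # 8 tiny fake images

# Computed once; only a small classifier would be trained on these features,
# which is far cheaper than training all of InceptionV3.
bottleneck = frozen_feature_extractor(images)
print(bottleneck.shape)  # (8, 5): one small feature vector per image
```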
The error message relates to the validation_data argument: since you are using model.fit_generator, the validation data should also be passed in via an ImageDataGenerator object (as you are already doing for the training data). It is not related to the lack of an auxiliary classifier: the Inception v3 model in Keras does not implement the auxiliary classifier from the original paper (which is another reason to try transfer learning rather than full training).
Update your code to supply the validation data through a generator as well:

datagen = ImageDataGenerator()
model.fit_generator(datagen.flow(X_train, Y_train, batch_size=32),
                    epochs=10,
                    steps_per_epoch=len(X_train) // 32,
                    class_weight=None,
                    verbose=2,
                    validation_data=datagen.flow(X_test, Y_test, batch_size=32),
                    validation_steps=len(X_test) // 32,
                    callbacks=[checkpointer])

Note that I have updated the deprecated samples_per_epoch and nb_epoch arguments to the newer steps_per_epoch and epochs (Keras 2 API), and used floor division so the step counts are integers.
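A note on the step counts (the sample numbers below are hypothetical): since the generator loops forever, Keras needs to be told how many batches make up one epoch.

```python
import math

# steps_per_epoch / validation_steps count batches, not samples.
n_train, batch_size = 179, 32

full_batches = n_train // batch_size           # 5: drops the last partial batch
all_batches = math.ceil(n_train / batch_size)  # 6: includes the partial batch

print(full_batches, all_batches)  # 5 6
```

Either choice works because the generator cycles through the data indefinitely; ceil simply guarantees every sample is seen each epoch.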