
Train multi-class image classifier in Keras

I was following a tutorial to learn how to train an image classifier with Keras:

https://blog.keras.io/building-powerful-image-classification-models-using-very-little-data.html

Specifically, I wanted to turn the second script given by the author into one that trains a multi-class classifier (the original is a binary cat vs. dog classifier). I have 5 classes in my train folder, so I made the following change:

In the train_top_model() function:

I changed

model = Sequential()
model.add(Flatten(input_shape=train_data.shape[1:]))
model.add(Dense(256, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(1, activation='sigmoid'))

model.compile(optimizer='rmsprop', loss='binary_crossentropy', metrics=['accuracy'])

into

model = Sequential()
model.add(Flatten(input_shape=train_data.shape[1:]))
model.add(Dense(256, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(5, activation='sigmoid'))

model.compile(optimizer='rmsprop', loss='categorical_crossentropy', metrics=['accuracy'])

train_labels = to_categorical(train_labels, 5)
validation_labels = to_categorical(validation_labels, 5)

After training, the model reached a training accuracy of nearly 99% but only about 70% validation accuracy. So I started thinking that maybe it is not that simple to convert 2-class training to 5 classes. Maybe I need to one-hot encode the class labels (but I don't know how).
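If I understand it correctly, to_categorical should already be doing the one-hot encoding. Here is a small sketch of what I think it produces (the label values are just placeholders for illustration):

import numpy as np
from keras.utils.np_utils import to_categorical

# integer class labels for 6 sample images, 5 classes numbered 0-4
labels = np.array([0, 1, 2, 3, 4, 2])

# each integer becomes a length-5 vector with a single 1
one_hot = to_categorical(labels, 5)
# one_hot[0] -> [ 1.  0.  0.  0.  0.]
# one_hot[5] -> [ 0.  0.  1.  0.  0.]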

EDIT:

I have attached my fine-tuning script below. Another problem: the accuracy does not noticeably increase once fine-tuning starts.

import os
import h5py
import numpy as np
from keras.preprocessing.image import ImageDataGenerator
from keras import optimizers
from keras.models import Sequential
from keras.layers import Convolution2D, MaxPooling2D, ZeroPadding2D
from keras.layers import Activation, Dropout, Flatten, Dense

# path to the model weights files.
weights_path = 'D:/Users/EJLTZ/Desktop/vgg16_weights.h5'
top_model_weights_path = 'bottleneck_weights_2.h5'
# dimensions of our images.
img_width, img_height = 150, 150

train_data_dir = 'D:/Users/EJLTZ/Desktop/BodyPart-full/train_new'
validation_data_dir = 'D:/Users/EJLTZ/Desktop/BodyPart-full/validation_new'
nb_train_samples = 500
nb_validation_samples = 972
nb_epoch = 50

# build the VGG16 network
model = Sequential()
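# note: channels-first input shape (3, height, width); this matches the
# Theano image dim ordering assumed by the original tutorial code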
model.add(ZeroPadding2D((1, 1), input_shape=(3, img_width, img_height)))

model.add(Convolution2D(64, 3, 3, activation='relu', name='conv1_1'))
model.add(ZeroPadding2D((1, 1)))
model.add(Convolution2D(64, 3, 3, activation='relu', name='conv1_2'))
model.add(MaxPooling2D((2, 2), strides=(2, 2)))

model.add(ZeroPadding2D((1, 1)))
model.add(Convolution2D(128, 3, 3, activation='relu', name='conv2_1'))
model.add(ZeroPadding2D((1, 1)))
model.add(Convolution2D(128, 3, 3, activation='relu', name='conv2_2'))
model.add(MaxPooling2D((2, 2), strides=(2, 2)))

model.add(ZeroPadding2D((1, 1)))
model.add(Convolution2D(256, 3, 3, activation='relu', name='conv3_1'))
model.add(ZeroPadding2D((1, 1)))
model.add(Convolution2D(256, 3, 3, activation='relu', name='conv3_2'))
model.add(ZeroPadding2D((1, 1)))
model.add(Convolution2D(256, 3, 3, activation='relu', name='conv3_3'))
model.add(MaxPooling2D((2, 2), strides=(2, 2)))

model.add(ZeroPadding2D((1, 1)))
model.add(Convolution2D(512, 3, 3, activation='relu', name='conv4_1'))
model.add(ZeroPadding2D((1, 1)))
model.add(Convolution2D(512, 3, 3, activation='relu', name='conv4_2'))
model.add(ZeroPadding2D((1, 1)))
model.add(Convolution2D(512, 3, 3, activation='relu', name='conv4_3'))
model.add(MaxPooling2D((2, 2), strides=(2, 2)))

model.add(ZeroPadding2D((1, 1)))
model.add(Convolution2D(512, 3, 3, activation='relu', name='conv5_1'))
model.add(ZeroPadding2D((1, 1)))
model.add(Convolution2D(512, 3, 3, activation='relu', name='conv5_2'))
model.add(ZeroPadding2D((1, 1)))
model.add(Convolution2D(512, 3, 3, activation='relu', name='conv5_3'))
model.add(MaxPooling2D((2, 2), strides=(2, 2)))

# load the weights of the VGG16 networks
# (trained on ImageNet, won the ILSVRC competition in 2014)
# note: when there is a complete match between your model definition
# and your weight savefile, you can simply call model.load_weights(filename)
assert os.path.exists(weights_path), 'Model weights not found (see "weights_path" variable in script).'
f = h5py.File(weights_path, 'r')  # open the weight file read-only
for k in range(f.attrs['nb_layers']):
    if k >= len(model.layers):
        # we don't look at the last (fully-connected) layers in the savefile
        break
    g = f['layer_{}'.format(k)]
    weights = [g['param_{}'.format(p)] for p in range(g.attrs['nb_params'])]
    model.layers[k].set_weights(weights)
f.close()
print('Model loaded.')

# build a classifier model to put on top of the convolutional model
top_model = Sequential()
top_model.add(Flatten(input_shape=model.output_shape[1:]))
top_model.add(Dense(256, activation='relu'))
top_model.add(Dropout(0.5))
top_model.add(Dense(5, activation='softmax'))

# note that it is necessary to start with a fully-trained
# classifier, including the top classifier,
# in order to successfully do fine-tuning
top_model.load_weights(top_model_weights_path)

# add the model on top of the convolutional base
model.add(top_model)

# set the first 25 layers (up to the last conv block)
# to non-trainable (weights will not be updated)
for layer in model.layers[:25]:
    layer.trainable = False

# compile the model with a SGD/momentum optimizer
# and a very slow learning rate.
model.compile(loss='categorical_crossentropy',
              optimizer=optimizers.SGD(lr=1e-4, momentum=0.9),
              metrics=['accuracy'])

# prepare data augmentation configuration
train_datagen = ImageDataGenerator(
    rescale=1./255,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True)

test_datagen = ImageDataGenerator(rescale=1./255)

train_generator = train_datagen.flow_from_directory(
    train_data_dir,
    target_size=(img_height, img_width),
    batch_size=32,
    class_mode='categorical')

validation_generator = test_datagen.flow_from_directory(
    validation_data_dir,
    target_size=(img_height, img_width),
    batch_size=32,
    class_mode='categorical')

# fine-tune the model
model.fit_generator(
    train_generator,
    samples_per_epoch=nb_train_samples,
    nb_epoch=nb_epoch,
    validation_data=validation_generator,
    nb_val_samples=nb_validation_samples)

model.save_weights("fine-tune_weights.h5")
model.save("fine-tune_model.h5", True)
Asked Jan 24 '17 by PIZZA PIZZA


1 Answer

  1. Use softmax as the activation function of the output layer; it is the generalization of the logistic (sigmoid) function to the multi-class case (see the sketch after this list).

  2. If the validation error is much greater than the training error, as in your case, it is an indicator of overfitting. You should apply some regularization, which is defined as any change to the learning algorithm that is intended to reduce the test error but not the training error. You can try things like data augmentation, early stopping, noise injection, more aggressive dropout, etc.

  3. If you have the same set-up as in the linked tutorial, change the class_mode of the train_generator and the validation_generator to 'categorical' and the generators will one-hot encode your classes for you.
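Putting points 1 and 3 together, here is a minimal sketch of the corrected top model and a categorical training generator, reusing the shapes, paths and hyper-parameters from your scripts (train_data is the bottleneck-feature array from the tutorial's second script; the EarlyStopping callback is just one example of the regularization mentioned in point 2):

from keras.models import Sequential
from keras.layers import Flatten, Dense, Dropout
from keras.callbacks import EarlyStopping
from keras.preprocessing.image import ImageDataGenerator

nb_classes = 5
img_width, img_height = 150, 150

# point 1: one output unit per class with a softmax activation,
# paired with the categorical cross-entropy loss
top_model = Sequential()
top_model.add(Flatten(input_shape=train_data.shape[1:]))
top_model.add(Dense(256, activation='relu'))
top_model.add(Dropout(0.5))
top_model.add(Dense(nb_classes, activation='softmax'))
top_model.compile(optimizer='rmsprop',
                  loss='categorical_crossentropy',
                  metrics=['accuracy'])

# point 3: class_mode='categorical' makes the generator return
# one-hot encoded labels, so no manual to_categorical call is needed
train_datagen = ImageDataGenerator(rescale=1./255, shear_range=0.2,
                                   zoom_range=0.2, horizontal_flip=True)
train_generator = train_datagen.flow_from_directory(
    'D:/Users/EJLTZ/Desktop/BodyPart-full/train_new',
    target_size=(img_height, img_width),
    batch_size=32,
    class_mode='categorical')

# point 2: stop training when the validation loss stops improving
early_stopping = EarlyStopping(monitor='val_loss', patience=3)
# pass callbacks=[early_stopping] to fit / fit_generator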

Answered by Sergii Gryshkevych