I am trying to run a ResNet example on the CIFAR-10 dataset using .flow_from_directory(directory). The code is below:
from __future__ import print_function
from keras.datasets import cifar10
from keras.preprocessing.image import ImageDataGenerator
from keras.utils import np_utils
from keras.callbacks import ReduceLROnPlateau, CSVLogger, EarlyStopping
import numpy as np
import resnet
import os
import cv2
import csv
#import keras
os.environ["CUDA_VISIBLE_DEVICES"] = "1"
# input image dimensions
img_rows, img_cols = 32, 32
# The CIFAR10 images are RGB.
img_channels = 3
nb_classes = 10
train_datagen = ImageDataGenerator(
    rescale=1./255,
    shear_range=0,
    zoom_range=0,
    horizontal_flip=False,
    width_shift_range=0.1,   # randomly shift images horizontally (fraction of total width)
    height_shift_range=0.1)  # randomly shift images vertically (fraction of total height)
test_datagen = ImageDataGenerator(rescale=1./255)
train_generator = train_datagen.flow_from_directory(
    '/home/datasets/cifar10/train',
    target_size=(32, 32),
    batch_size=32,
    shuffle=False)
validation_generator = test_datagen.flow_from_directory(
    '/home/datasets/cifar10/test',
    target_size=(32, 32),
    batch_size=32,
    shuffle=False)
model = resnet.ResnetBuilder.build_resnet_18((img_channels, img_rows, img_cols), nb_classes)
model.compile(loss='categorical_crossentropy',
              optimizer='adam',
              metrics=['accuracy'])
model.fit_generator(
    train_generator,
    steps_per_epoch=500,
    epochs=50,
    validation_data=validation_generator,
    validation_steps=250)
However, I am obtaining the following accuracy values:
500/500 [==============================] - 22s - loss: 0.8139 - acc: 0.9254 - val_loss: 12.7198 - val_acc: 0.1250
Epoch 2/50
500/500 [==============================] - 19s - loss: 1.0645 - acc: 0.8856 - val_loss: 8.4179 - val_acc: 0.0560
Epoch 3/50
500/500 [==============================] - 19s - loss: 2.1014 - acc: 0.7492 - val_loss: 10.7770 - val_acc: 0.0956
Epoch 4/50
500/500 [==============================] - 19s - loss: 1.6806 - acc: 0.7772 - val_loss: 6.1023 - val_acc: 0.0741
Epoch 5/50
500/500 [==============================] - 19s - loss: 1.1798 - acc: 0.8669 - val_loss: 6.9016 - val_acc: 0.1253
Epoch 6/50
500/500 [==============================] - 19s - loss: 1.5448 - acc: 0.8369 - val_loss: 3.6371 - val_acc: 0.0370
Epoch 7/50
500/500 [==============================] - 19s - loss: 1.3763 - acc: 0.8599 - val_loss: 4.8012 - val_acc: 0.1204
Epoch 8/50
500/500 [==============================] - 19s - loss: 1.0186 - acc: 0.8891 - val_loss: 6.8395 - val_acc: 0.0912
Epoch 9/50
500/500 [==============================] - 19s - loss: 0.9477 - acc: 0.9081 - val_loss: 10.4287 - val_acc: 0.1253
Epoch 10/50
500/500 [==============================] - 19s - loss: 1.0689 - acc: 0.8686 - val_loss: 7.9931 - val_acc: 0.1253
I am using the ResNet implementation from this link. I tried numerous examples to sort out the problem, including the one from the official documentation, but I am unable to resolve it. The training accuracy changes, however the validation accuracy stays roughly constant. Can someone point out the problem?
The flow_from_directory() method takes the path of a directory and generates batches of augmented data. The directory structure is very important when you are using flow_from_directory(): the dataset root contains (at least) two folders, one for train and one for test, and each of those must in turn contain one sub-folder per class holding that class's images.
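For instance, a layout that flow_from_directory() can read would look roughly like the sketch below (the class sub-folder names are only illustrative; Keras infers the labels from whatever sub-folder names it finds):

/home/datasets/cifar10/
    train/
        airplane/
            img_0001.png
            ...
        automobile/
            ...
        (one sub-folder for each of the 10 classes)
    test/
        airplane/
            ...
        (the same 10 class sub-folders)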
directory: the path to a folder whose class sub-folders contain the images; for the validation generator in this question that is /home/datasets/cifar10/test. batch_size: set this to some number that divides the total number of images in your test set exactly, so that a whole number of steps covers every image.
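As a small illustration of that batch_size remark, using the fact that the CIFAR-10 test set has 10,000 images (the batch_size=50 below is just an example value, not something from the question):

num_test_images = 10000   # size of the CIFAR-10 test set
batch_size = 50           # 50 divides 10000 exactly, unlike 32
validation_steps = num_test_images // batch_size   # 200 steps cover every test image exactly once
print(batch_size, validation_steps)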
Class modes: "categorical" gives 2D output (a list of numbers of length N, e.g. [0, 0, 1, 0]), i.e. a one-hot encoding (only one entry is 1/"hot") marking the class of the image. This is for mutually exclusive labels, which is exactly the CIFAR-10 case, and it is the default class_mode of flow_from_directory().
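As a rough sketch of what that means for the generators above (using the train_generator from the question, and assuming the default class_mode='categorical', batch_size=32, 32x32 RGB images and 10 classes):

x_batch, y_batch = next(train_generator)
print(x_batch.shape)  # (32, 32, 32, 3): 32 RGB images of size 32x32
print(y_batch.shape)  # (32, 10): each row is one-hot, e.g. [0, 0, 1, 0, 0, 0, 0, 0, 0, 0]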
According to the Keras documentation, flow_from_directory(directory) takes the path to a directory and generates batches of augmented/normalized data, yielding those batches indefinitely, in an infinite loop. With shuffle=False it keeps yielding the batches in the same fixed order, which leads to the accuracy values above. I changed it to shuffle=True and it works fine now.
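In other words, the suggested change is just flipping the shuffle flag on the generator; for the training generator it would look like this (a sketch, keeping all other arguments from the question unchanged):

train_generator = train_datagen.flow_from_directory(
    '/home/datasets/cifar10/train',
    target_size=(32, 32),
    batch_size=32,
    shuffle=True)  # draw the images in a new random order each epoch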