Constant Validation Accuracy with a high loss in machine learning

I'm currently trying to create an image classification model using Inception V3 with 2 classes. I have 1428 images, split about 70/30 between the two classes. When I run my model I get a fairly high loss as well as a constant validation accuracy. What might be causing this constant value?

import numpy as np
import keras
from sklearn.model_selection import train_test_split

data = np.array(data, dtype="float") / 255.0
labels = np.array(labels, dtype="uint8")

(trainX, testX, trainY, testY) = train_test_split(
                            data,labels, 
                            test_size=0.2, 
                            random_state=42) 

img_width, img_height = 320, 320  # input size used for InceptionV3 (its native size is 299x299)

train_samples =  1145 
validation_samples = 287
epochs = 20

batch_size = 32

base_model = keras.applications.InceptionV3(
        weights='imagenet',
        include_top=False,
        input_shape=(img_width, img_height, 3))

model_top = keras.models.Sequential()
model_top.add(keras.layers.GlobalAveragePooling2D(input_shape=base_model.output_shape[1:]))
model_top.add(keras.layers.Dense(350, activation='relu'))
model_top.add(keras.layers.Dropout(0.2))
model_top.add(keras.layers.Dense(1, activation='sigmoid'))
model = keras.models.Model(inputs=base_model.input, outputs=model_top(base_model.output))


# Freeze the first 30 layers; only the later layers and the new top are trained
for layer in model.layers[:30]:
  layer.trainable = False

model.compile(optimizer=keras.optimizers.Adam(lr=0.00001,
                                              beta_1=0.9,
                                              beta_2=0.999,
                                              epsilon=1e-08),
              loss='binary_crossentropy',
              metrics=['accuracy'])

#Image Processing and Augmentation 
train_datagen = keras.preprocessing.image.ImageDataGenerator(
          zoom_range = 0.05,
          #width_shift_range = 0.05, 
          height_shift_range = 0.05,
          horizontal_flip = True,
          vertical_flip = True,
          fill_mode ='nearest') 

val_datagen = keras.preprocessing.image.ImageDataGenerator()


train_generator = train_datagen.flow(
        trainX, 
        trainY,
        batch_size=batch_size,
        shuffle=True)

validation_generator = val_datagen.flow(
                testX,
                testY,
                batch_size=batch_size)

history = model.fit_generator(
    train_generator,
    steps_per_epoch=train_samples // batch_size,
    epochs=epochs,
    validation_data=validation_generator,
    validation_steps=validation_samples // batch_size,
    # ModelCheckpoint must be instantiated, not passed as a class;
    # the filepath here is illustrative
    callbacks=[keras.callbacks.ModelCheckpoint('model.h5', save_best_only=True)])

This is my log when I run my model:

Epoch 1/20
35/35 [==============================] - 52s 1s/step - loss: 0.6347 - acc: 0.6830 - val_loss: 0.6237 - val_acc: 0.6875
Epoch 2/20
35/35 [==============================] - 14s 411ms/step - loss: 0.6364 - acc: 0.6756 - val_loss: 0.6265 - val_acc: 0.6875
Epoch 3/20
35/35 [==============================] - 14s 411ms/step - loss: 0.6420 - acc: 0.6743 - val_loss: 0.6254 - val_acc: 0.6875
Epoch 4/20
35/35 [==============================] - 14s 414ms/step - loss: 0.6365 - acc: 0.6851 - val_loss: 0.6289 - val_acc: 0.6875
Epoch 5/20
35/35 [==============================] - 14s 411ms/step - loss: 0.6359 - acc: 0.6727 - val_loss: 0.6244 - val_acc: 0.6875
Epoch 6/20
35/35 [==============================] - 15s 415ms/step - loss: 0.6342 - acc: 0.6862 - val_loss: 0.6243 - val_acc: 0.6875
asked Aug 31 '25 by student17
1 Answer

I think your learning rate is too low and you are training for too few epochs. Try lr = 0.001 and epochs = 100.
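
A minimal sketch of that change against the compile/fit calls in the question, reusing the same variables (0.001 is the Keras default learning rate for Adam; 100 epochs is a starting point, not a tuned value):

model.compile(optimizer=keras.optimizers.Adam(lr=0.001),  # 100x the original lr
              loss='binary_crossentropy',
              metrics=['accuracy'])

history = model.fit_generator(
    train_generator,
    steps_per_epoch=train_samples // batch_size,
    epochs=100,  # up from 20
    validation_data=validation_generator,
    validation_steps=validation_samples // batch_size)

With the backbone mostly frozen and lr = 1e-5, the new top may simply be learning too slowly to move off the majority-class prediction, which would explain a val_acc stuck at 0.6875 (roughly your 70/30 class split).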

answered Sep 02 '25 by Novak