
Keras log loss stays the same

I am new to Keras and deep learning. When I create a basic sample model and fit it, my model's log loss is always the same.

from keras.models import Sequential
from keras.layers import Convolution2D, MaxPooling2D, Dropout, Flatten, Dense, Activation
from keras.optimizers import Adam

model = Sequential()
model.add(Convolution2D(32, 3, 3, border_mode='same', init='he_normal',
                        input_shape=(color_type, img_rows, img_cols)))
model.add(MaxPooling2D(pool_size=(2, 2), dim_ordering="th"))
model.add(Dropout(0.5))

model.add(Convolution2D(64, 3, 3, border_mode='same', init='he_normal'))
model.add(MaxPooling2D(pool_size=(2, 2), dim_ordering="th"))
model.add(Dropout(0.5))

model.add(Convolution2D(128, 3, 3, border_mode='same', init='he_normal'))
model.add(MaxPooling2D(pool_size=(2, 2), dim_ordering="th"))
model.add(Dropout(0.5))

model.add(Flatten())
model.add(Dense(10))
model.add(Activation('softmax'))

model.compile(Adam(lr=1e-3), loss='categorical_crossentropy',
              metrics=['accuracy'])

model.fit(x_train, y_train, batch_size=64, nb_epoch=200,
          verbose=1, validation_data=(x_valid, y_valid))

Train on 17939 samples, validate on 4485 samples

Epoch 1/200 17939/17939 [==============================] - 8s - loss: 99.8137 - acc: 0.3096 - val_loss: 99.9626 - val_acc: 0.0000e+00

Epoch 2/200 17939/17939 [==============================] - 8s - loss: 99.8135 - acc: 0.2864 - val_loss: 99.9626 - val_acc: 0.0000e+00

Epoch 3/200 17939/17939 [==============================] - 8s - loss: 99.8135 - acc: 0.3120 - val_loss: 99.9626 - val_acc: 1.0000

Epoch 4/200 17939/17939 [==============================] - 10s - loss: 99.8135 - acc: 0.3315 - val_loss: 99.9626 - val_acc: 1.0000

Epoch 5/200 17939/17939 [==============================] - 10s - loss: 99.8138 - acc: 0.3435 - val_loss: 99.9626 - val_acc: 0.4620

...

It keeps going like this.

Do you know which part I got wrong?

Tomas Ukasta asked Aug 11 '17 15:08


1 Answer

One reason for such behavior might be a learning rate that is too small. Try increasing it, e.g. Adam(lr=1e-2) or Adam(lr=1e-1). Also, wait a couple more epochs and see whether the loss improves. If not, try decreasing the dropout. In addition, I would suggest normalizing your input data if you haven't done so already.
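To make the last suggestion concrete, here is a minimal sketch of input normalization. It assumes raw pixel values in the 0-255 range; the batch shape and the Keras 1-style calls in the comments are illustrative, not taken from the original post:

```python
import numpy as np

# Scale raw pixel values (assumed to be in 0-255) into [0, 1].
def normalize(x):
    return x.astype('float32') / 255.0

# Dummy batch in the question's "th" layout: (samples, channels, rows, cols)
x_train = normalize(np.random.randint(0, 256, size=(4, 1, 24, 24)))

# The other changes suggested above would then look like (Keras 1 API,
# as used in the question):
#   model.compile(Adam(lr=1e-2), loss='categorical_crossentropy',
#                 metrics=['accuracy'])   # larger learning rate
#   model.add(Dropout(0.25))              # lower dropout than 0.5
```

Feeding unnormalized 0-255 inputs into he_normal-initialized conv layers can saturate activations and leave the loss stuck, which matches the flat loss in the logs above.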

Miriam Farber answered Nov 19 '22 11:11