When training starts, the progress bar shows only loss and acc; val_loss and val_acc are missing. Only at the end of the epoch are these values shown.
model.add(Flatten())
model.add(Dense(512, activation="relu"))
model.add(Dropout(0.5))
model.add(Dense(10, activation="softmax"))
model.compile(
    loss='categorical_crossentropy',
    optimizer='adam',
    metrics=['accuracy']
)
model.fit(
    x_train,
    y_train,
    batch_size=32,
    epochs=1,
    validation_data=(x_test, y_test),
    shuffle=True
)
This is how the training starts:
Train on 50000 samples, validate on 10000 samples
Epoch 1/1
32/50000 [..............................] - ETA: 34:53 - loss: 2.3528 - acc: 0.0938
64/50000 [..............................] - ETA: 18:56 - loss: 2.3131 - acc: 0.0938
96/50000 [..............................] - ETA: 13:45 - loss: 2.3398 - acc: 0.1146
and this is how it finishes:
49984/50000 [============================>.] - ETA: 0s - loss: 1.5317 - acc: 0.4377
50000/50000 [==============================] - 231s 5ms/step - loss: 1.5317 - acc: 0.4378 - val_loss: 1.1503 - val_acc: 0.5951
I want to see val_loss and val_acc on each line.
Usually, as the epochs go by, loss should decrease and accuracy should increase. But with val_loss (Keras validation loss) and val_acc (Keras validation accuracy), other patterns are possible, for example: val_loss starts increasing while val_acc starts decreasing, a typical sign of overfitting.
val_loss is the value of the cost function on your validation data, and loss is the value of the cost function on your training data.
val_acc refers to the validation set: a set of samples that was not shown to the network during training, and hence a measure of how well your model generalizes to cases outside the training set. It is common for validation accuracy to be lower than training accuracy.
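As an aside, fit() returns a History object that records these values once per epoch, so you can also read them after training. A minimal sketch (note that the metric key names vary by Keras version: acc/val_acc in older releases, accuracy/val_accuracy in newer ones):
history = model.fit(
    x_train, y_train,
    batch_size=32,
    epochs=1,
    validation_data=(x_test, y_test)
)
# One entry per epoch
print(history.history['val_loss'])
print(history.history['val_acc'])  # or 'val_accuracy' in newer Keras versions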
Validation loss and accuracy are computed at epoch end, not at batch end. If you want to compute those values after each batch, you would have to implement your own callback with an on_batch_end() method that calls self.model.evaluate() on the validation set. See https://keras.io/callbacks/.
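A minimal sketch of such a callback, assuming the x_test/y_test arrays from the fit() call above (the class name and the print format are illustrative, not part of Keras; only Callback and on_batch_end() come from the library):
from keras.callbacks import Callback

class BatchValidation(Callback):
    """Evaluate on the validation set after every training batch (sketch)."""
    def __init__(self, x_val, y_val):
        super().__init__()
        self.x_val = x_val
        self.y_val = y_val

    def on_batch_end(self, batch, logs=None):
        # verbose=0 suppresses evaluate()'s own progress bar
        val_loss, val_acc = self.model.evaluate(self.x_val, self.y_val, verbose=0)
        print(' - val_loss: %.4f - val_acc: %.4f' % (val_loss, val_acc))

model.fit(
    x_train,
    y_train,
    batch_size=32,
    epochs=1,
    validation_data=(x_test, y_test),
    shuffle=True,
    callbacks=[BatchValidation(x_test, y_test)]
)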
But computing the validation loss and accuracy after each batch is going to slow down your training a lot, and it doesn't bring much in terms of evaluating the network's performance.
It doesn't make much sense to compute the validation metrics at each iteration: it would make your training process much slower, and your model doesn't change that much from one iteration to the next. It makes much more sense to compute these metrics at the end of each epoch.
In your case you have 50000 training samples, 10000 validation samples, and a batch size of 32. If you were to compute val_loss and val_acc after each iteration, then for every 32 training samples that update your weights you would run 313 (i.e. 10000/32, rounded up) validation batches of 32 samples. Since each epoch consists of 1563 iterations (i.e. 50000/32, rounded up), you'd have to perform 489219 (i.e. 313 * 1563) batch predictions per epoch just to evaluate the model. This would make your model train several orders of magnitude slower!
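Spelled out in code, the counts above are:
import math

n_train, n_val, batch_size = 50000, 10000, 32
val_batches = math.ceil(n_val / batch_size)      # 313
train_iters = math.ceil(n_train / batch_size)    # 1563
print(val_batches * train_iters)                 # 489219 validation batches per epoch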
If you still want to compute the validation metrics at the end of each iteration (not recommended for the reasons stated above), you could simply shorten your "epoch" so that your model sees just 1 batch per epoch:
batch_size = 32  # was referenced below but never defined in the original snippet
model.fit(
    x_train,
    y_train,
    batch_size=batch_size,
    epochs=len(x_train) // batch_size + 1,  # 1563 in your case
    steps_per_epoch=1,
    validation_data=(x_test, y_test),
    shuffle=True
)
This isn't exactly equivalent, because the samples will be drawn at random, with replacement, from the data, but it is the easiest approach you can get.
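If you want exact one-pass epochs together with per-batch validation, you could also drive training yourself with train_on_batch(). A minimal sketch, assuming the model compiled as above (the loop structure is our own, not a Keras recipe, and it will be slow for the reasons discussed):
import numpy as np

batch_size = 32
epochs = 1
for epoch in range(epochs):
    order = np.random.permutation(len(x_train))  # shuffle without replacement
    for start in range(0, len(x_train), batch_size):
        batch = order[start:start + batch_size]
        loss, acc = model.train_on_batch(x_train[batch], y_train[batch])
        val_loss, val_acc = model.evaluate(x_test, y_test, verbose=0)
        print('loss: %.4f - acc: %.4f - val_loss: %.4f - val_acc: %.4f'
              % (loss, acc, val_loss, val_acc))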