I've just started using Keras. The sample I'm working on has a model, and the following snippet is used to run it:
```python
from sklearn.preprocessing import LabelBinarizer

label_binarizer = LabelBinarizer()
y_one_hot = label_binarizer.fit_transform(y_train)

model.compile('adam', 'categorical_crossentropy', ['accuracy'])
history = model.fit(X_normalized, y_one_hot, nb_epoch=3, validation_split=0.2)
```
I get the following response:
```
Using TensorFlow backend.
Train on 80 samples, validate on 20 samples
Epoch 1/3
32/80 [===========>..................] - ETA: 0s - loss: 1.5831 - acc: 0.4062
80/80 [==============================] - 0s - loss: 1.3927 - acc: 0.4500 - val_loss: 0.7802 - val_acc: 0.8500
Epoch 2/3
32/80 [===========>..................] - ETA: 0s - loss: 0.9300 - acc: 0.7500
80/80 [==============================] - 0s - loss: 0.8490 - acc: 0.8000 - val_loss: 0.5772 - val_acc: 0.8500
Epoch 3/3
32/80 [===========>..................] - ETA: 0s - loss: 0.6397 - acc: 0.8750
64/80 [=======================>......] - ETA: 0s - loss: 0.6867 - acc: 0.7969
80/80 [==============================] - 0s - loss: 0.6638 - acc: 0.8000 - val_loss: 0.4294 - val_acc: 0.8500
```
The documentation says that fit returns
A History instance. Its history attribute contains all information collected during training.
Does anyone know how to interpret the history instance?
For example, what does 32/80 mean? I assume 80 is the number of samples but what is 32? ETA: 0s ??
By default, Keras' model.fit() returns a History callback object. This object keeps track of the accuracy, loss, and other training metrics for each epoch, in memory.
fit() is for training the model with the given inputs (and corresponding training labels). evaluate() is for evaluating the already-trained model using the validation (or test) data and the corresponding labels; it returns the loss value and metric values for the model.
According to the Keras documentation, the model.fit method returns a History callback, which has a history attribute containing the lists of successive losses and other metrics.
Keras can separate a portion of your training data into a validation dataset and evaluate the performance of your model on that validation dataset each epoch. You do this by setting the validation_split argument of the fit() function to a fraction of your training dataset (e.g. 0.2 holds out 20%).
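As a rough sketch of how that split behaves (assumption: Keras carves off the *last* fraction of the samples, before any shuffling), validation_split=0.2 on 100 samples yields 80 training and 20 validation samples, matching the "Train on 80 samples, validate on 20 samples" line in the log:

```python
# Sketch of the validation_split behaviour (assumption: the last
# fraction of the data is held out, before any shuffling).
def split_for_validation(samples, validation_split=0.2):
    split_at = int(len(samples) * (1.0 - validation_split))
    return samples[:split_at], samples[split_at:]

data = list(range(100))
train, val = split_for_validation(data, validation_split=0.2)
print(len(train), len(val))  # 80 20
```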
ETA = Estimated Time of Arrival. 80 is the size of your training set; 32/80 and 64/80 mean that your batch size is 32 and that the first (or second, respectively) batch is currently being processed. loss and acc refer to the current loss and accuracy on the training set. At the end of each epoch your trained NN is evaluated against your validation set; that is what val_loss and val_acc refer to.
The history object returned by model.fit() is a simple class with some fields, e.g. a reference to the model, a params dict and, most importantly, a history dict. It stores the values of loss and acc (or any other metric in use) at the end of each epoch. For 2 epochs it will look like this:
```python
{
    'val_loss': [16.11809539794922, 14.12947562917035],
    'val_acc': [0.0, 0.0],
    'loss': [14.890108108520508, 12.088571548461914],
    'acc': [0.0, 0.25]
}
```
This comes in very handy if you want to visualize your training progress.
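For instance, with the numbers from the training log in the question (the history dict here is hand-built from those values, since history.history is a plain dict of per-epoch lists; the matplotlib plot is optional and guarded in case the library is not installed):

```python
# Values copied from the training log in the question.
history_dict = {
    'loss':     [1.3927, 0.8490, 0.6638],
    'acc':      [0.4500, 0.8000, 0.8000],
    'val_loss': [0.7802, 0.5772, 0.4294],
    'val_acc':  [0.8500, 0.8500, 0.8500],
}

# Epoch (0-based) with the lowest validation loss:
best_epoch = min(range(len(history_dict['val_loss'])),
                 key=history_dict['val_loss'].__getitem__)
print(best_epoch)  # 2

# Optional: plot the curves if matplotlib is available.
try:
    import matplotlib.pyplot as plt
    plt.plot(history_dict['loss'], label='loss')
    plt.plot(history_dict['val_loss'], label='val_loss')
    plt.legend()
    plt.savefig('training_progress.png')
except ImportError:
    pass
```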
Note: if your validation loss/accuracy starts increasing while your training loss/accuracy is still decreasing, this is an indicator of overfitting.
Note 2: at the very end you should test your NN against some test set that is different from your training set and validation set, and thus has never been touched during the training process.
32 is your batch size. It is the default value, which you can change via the batch_size argument of your fit function if you wish to do so.
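The progress counters in the log follow directly from that: with 80 samples and the default batch size of 32, an epoch runs three batches, and the counter after each batch lands exactly on the 32/80, 64/80, 80/80 values shown above. A small sketch:

```python
import math

n_samples, batch_size = 80, 32
n_batches = math.ceil(n_samples / batch_size)  # 3 batches per epoch

# Samples processed after each batch, as shown in the progress bar.
progress = [min((i + 1) * batch_size, n_samples) for i in range(n_batches)]
print(progress)  # [32, 64, 80]
```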
After the first batch is trained, Keras estimates the training duration (ETA: Estimated Time of Arrival) of one epoch, i.e. one round of training over all your samples.
In addition, you get the loss (a measure of how far the predictions are from the true labels) and your metric (in your case, the accuracy) for both the training and the validation samples.
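To make "loss" concrete: with categorical_crossentropy (the loss compiled into the question's model), the per-sample loss is minus the log of the probability the network assigns to the true class. A minimal pure-Python sketch (the predictions below are made up for illustration):

```python
import math

def categorical_crossentropy(y_true, y_pred):
    """Mean categorical cross-entropy for one-hot labels."""
    per_sample = [-math.log(sum(t * p for t, p in zip(yt, yp)))
                  for yt, yp in zip(y_true, y_pred)]
    return sum(per_sample) / len(per_sample)

y_true = [[0, 1, 0], [1, 0, 0]]              # one-hot labels
y_pred = [[0.1, 0.8, 0.1], [0.6, 0.3, 0.1]]  # softmax outputs (made up)
print(round(categorical_crossentropy(y_true, y_pred), 3))  # 0.367
```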