 

how to save val_loss and val_acc in Keras

Tags:

python

keras

I have trouble recording 'val_loss' and 'val_acc' in Keras. 'loss' and 'acc' are easy because they are always recorded in the history returned by model.fit.

The documentation says: 'val_loss' is recorded if validation is enabled in fit, and 'val_acc' is recorded if validation and accuracy monitoring are enabled. But what does this mean?

My code is model.fit(train_data, train_labels, epochs = 64, batch_size = 10, shuffle = True, validation_split = 0.2, callbacks=[history]).

As you can see, I hold out 20% of the data for validation (validation_split = 0.2) and shuffle the data. In this case, how can I enable validation in fit so that 'val_loss' and 'val_acc' are recorded?

Thanks

Rocco asked Nov 19 '17

People also ask

What are val_loss and val_acc in Keras?

Usually, as epochs increase, loss should go down and accuracy should go up. But with val_loss (Keras validation loss) and val_acc (Keras validation accuracy), many other cases are possible: for example, val_loss may start increasing while val_acc starts decreasing, a typical sign of overfitting.

What is the difference between loss and Val_loss?

val_loss is the value of the cost function on your validation data, and loss is the value of the cost function on your training data.

What is the difference between accuracy and val_accuracy?

val_accuracy is the accuracy of the model's predictions on a separately held-out validation set, evaluated after each training epoch, whereas accuracy is measured on the training data itself.

How can val_loss be reduced?

If validation loss is rising while training loss falls, the model is overfitting: decrease your network size or increase dropout (for example, try a dropout rate of 0.5). If your training and validation loss are about equal, then your model is underfitting: increase the size of your model (either the number of layers or the number of neurons per layer).
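The dropout fix mentioned above can be illustrated without Keras at all. The sketch below implements "inverted dropout" (the variant Keras applies at training time) on a plain Python list; the function name and the fixed random seed are illustrative choices, not Keras API:

```python
import random

def inverted_dropout(activations, rate, rng):
    """Zero each activation with probability `rate`, scaling the
    survivors by 1/(1 - rate) so the expected sum is unchanged."""
    keep = 1.0 - rate
    return [a / keep if rng.random() < keep else 0.0 for a in activations]

rng = random.Random(0)
out = inverted_dropout([1.0] * 1000, rate=0.5, rng=rng)
```

With rate=0.5, roughly half the activations are zeroed and the survivors are scaled to 2.0, so the layer's expected output is unchanged between training and inference.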


1 Answer

From the Keras documentation, we have for the Model.fit method:

fit(x=None, y=None, 
    batch_size=None, 
    epochs=1, 
    verbose=1, 
    callbacks=None, 
    validation_split=0.0, validation_data=None, 
    shuffle=True, 
    class_weight=None, 
    sample_weight=None, 
    initial_epoch=0, 
    steps_per_epoch=None, 
    validation_steps=None
)
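One detail worth knowing about validation_split (per the Keras docs): the validation samples are taken from the *end* of the input arrays before any shuffling, so shuffle=True only shuffles the training portion. A minimal pure-Python sketch of that slicing behavior, with made-up data and a hypothetical helper name:

```python
def split_validation(x, y, validation_split=0.2):
    """Mimic how Keras validation_split slices off the *last*
    fraction of samples, before the training data is shuffled."""
    n_val = int(len(x) * validation_split)
    split_at = len(x) - n_val
    return (x[:split_at], y[:split_at]), (x[split_at:], y[split_at:])

# 10 samples, 20% validation -> the last 2 samples form the validation set
data = list(range(10))
labels = [d % 2 for d in data]
(train_x, train_y), (val_x, val_y) = split_validation(data, labels, 0.2)
```

This is why you should shuffle your data yourself before calling fit if the samples are ordered (e.g. sorted by class), otherwise the validation set may not be representative.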

'val_loss' is recorded if validation is enabled in fit, and val_acc is recorded if validation and accuracy monitoring are enabled. - This is from the keras.callbacks.Callback() documentation, which applies to the callbacks parameter of the fit method above.

Note that a bare keras.callbacks.Callback() records nothing by itself - it is the base class you subclass to write custom callbacks. Keras automatically attaches a History callback to every fit call, so you do not need to pass one in callbacks at all:

    model.fit(train_data, 
              train_labels,
              epochs = 64, 
              batch_size = 10,
              shuffle = True,
              validation_split = 0.2
             ) 

'val_loss' is recorded if validation is enabled in fit means: when using the model.fit method, you enable validation either with the validation_split parameter, or with the validation_data parameter, which takes a tuple (x_val, y_val) or (x_val, y_val, val_sample_weights) on which to evaluate the loss and any model metrics at the end of each epoch.

A History object. Its History.history attribute is a record of training loss values and metrics values at successive epochs, as well as validation loss values and validation metrics values (if applicable). - Keras documentation (return value of the model.fit method)

You are using the History callback in your model as follows:

model.fit(train_data, 
            train_labels,
            epochs = 64,
            batch_size = 10,
            shuffle = True,
            validation_split = 0.2, 
            callbacks=[history]
       )

history.history will give you a dictionary with loss, acc, val_loss and val_acc, provided you save the return value of model.fit in a variable, like below. Note that model.fit itself returns the History object, so you do not need to pass one in callbacks (in your snippet, callbacks=[history] would in fact fail, since history is not defined until fit returns):

history = model.fit(
     train_data, 
     train_labels,
     epochs = 64,
     batch_size = 10,
     shuffle = True,
     validation_split = 0.2
)
history.history

The output will be like the following:

{'val_loss': [14.431451635814849,
              14.431451635814849,
              14.431451635814849,
              14.431451635814849,
              14.431451635814849,
              14.431451635814849,
              14.431451635814849,
              14.431451635814849,
              14.431451635814849,
              14.431451635814849],
 'val_acc':  [0.1046428571712403,
              0.1046428571712403,
              0.1046428571712403,
              0.1046428571712403,
              0.1046428571712403,
              0.1046428571712403,
              0.1046428571712403,
              0.1046428571712403,
              0.1046428571712403,
              0.1046428571712403],
 'loss': [14.555215610322499,
          14.555215534028553,
          14.555215548560733,
          14.555215588524229,
          14.555215592157273,
          14.555215581258137,
          14.555215575808571,
          14.55521561940511,
          14.555215563092913,
          14.555215624854679],
 'acc': [0.09696428571428571,
         0.09696428571428571,
         0.09696428571428571,
         0.09696428571428571,
         0.09696428571428571,
         0.09696428571428571,
         0.09696428571428571,
         0.09696428571428571,
         0.09696428571428571,
         0.09696428571428571]}
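Because history.history is a plain Python dict of lists (one entry per epoch), you can query it directly, for example to find the epoch with the best validation loss. The sketch below uses made-up numbers in place of a real training run:

```python
# Stand-in for history.history from a real model.fit call
history_dict = {
    "loss":     [0.90, 0.55, 0.40, 0.35],
    "acc":      [0.60, 0.75, 0.82, 0.85],
    "val_loss": [0.85, 0.60, 0.50, 0.58],
    "val_acc":  [0.62, 0.73, 0.80, 0.78],
}

# Epoch (0-indexed) with the lowest validation loss
best_epoch = min(range(len(history_dict["val_loss"])),
                 key=lambda i: history_dict["val_loss"][i])
best_val_acc = history_dict["val_acc"][best_epoch]
```

In this made-up run, val_loss bottoms out at epoch 2 and rises afterwards while training loss keeps falling - the overfitting pattern described in the question section above.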

You can save the data either by using CSVLogger, as shown below and mentioned in the comments, or by the longer method of writing a dictionary to a CSV file, as described here: writing a dictionary to a csv

from keras.callbacks import CSVLogger

csv_logger = CSVLogger('training.log')
model.fit(X_train, Y_train, callbacks=[csv_logger])
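If you prefer the manual route over CSVLogger, the dictionary-to-CSV approach linked above boils down to a few lines of stdlib code. Here history_dict stands in for history.history, and the output filename is an arbitrary choice:

```python
import csv

# Stand-in for history.history (two epochs of made-up values)
history_dict = {
    "loss": [14.55, 14.55], "acc": [0.096, 0.096],
    "val_loss": [14.43, 14.43], "val_acc": [0.104, 0.104],
}

with open("training_history.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["epoch"] + list(history_dict))       # header row
    for epoch, row in enumerate(zip(*history_dict.values())):
        writer.writerow([epoch] + list(row))              # one row per epoch
```

This produces one row per epoch with the same columns CSVLogger would write, which is convenient if you want to post-process the history before saving.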
aspiring1 answered Oct 20 '22