
What is the difference between the terms accuracy and validation accuracy

I have used an LSTM from Keras to build a model that can detect whether two questions on Stack Overflow are duplicates or not. When I run the model I see the following output during the epochs.

Epoch 23/200
727722/727722 [==============================] - 67s - loss: 0.3167 - acc: 0.8557 - val_loss: 0.3473 - val_acc: 0.8418
Epoch 24/200
727722/727722 [==============================] - 67s - loss: 0.3152 - acc: 0.8573 - val_loss: 0.3497 - val_acc: 0.8404
Epoch 25/200
727722/727722 [==============================] - 67s - loss: 0.3136 - acc: 0.8581 - val_loss: 0.3518 - val_acc: 0.8391

I am trying to understand the meaning of each of these terms. Which of the above values is the accuracy of my model? I am comparatively new to machine learning, so any explanation would help.

asked Jul 15 '18 02:07 by Dookoto_Sea


1 Answer

When training a machine learning model, one of the main things you want to avoid is overfitting. This is when your model fits the training data well, but isn't able to generalize and make accurate predictions for data it hasn't seen before.

To find out if their model is overfitting, data scientists hold out part of the data: they split it into two parts - the training set and the validation set. The training set is used to train the model, while the validation set is only used to evaluate the model's performance.
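As a minimal pure-Python sketch of that split (Keras can also do this for you via the `validation_split` argument of `fit`; the function name and fraction here are illustrative, not from the question):

```python
import random

def train_val_split(data, val_fraction=0.2, seed=0):
    """Shuffle the data, then hold out a fraction as the validation set."""
    rng = random.Random(seed)
    shuffled = data[:]
    rng.shuffle(shuffled)
    n_val = int(len(shuffled) * val_fraction)
    # Everything after the first n_val items trains the model;
    # the held-out n_val items are only ever used for evaluation.
    return shuffled[n_val:], shuffled[:n_val]

samples = list(range(100))
train, val = train_val_split(samples)
print(len(train), len(val))  # 80 20
```

The key property is that the two sets are disjoint, so validation metrics are computed on data the model never trained on.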

Metrics on the training set let you see how your model is progressing in terms of its training, but it's metrics on the validation set that let you get a measure of the quality of your model - how well it's able to make new predictions based on data it hasn't seen before.

With this in mind, loss and acc are measures of loss and accuracy on the training set, while val_loss and val_acc are measures of loss and accuracy on the validation set.
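To make those metrics concrete, here is a toy sketch of how accuracy and binary cross-entropy loss are computed for a model that outputs probabilities (the numbers are made up; Keras computes the same quantities internally, batched over the whole set):

```python
import math

def binary_crossentropy(y_true, y_prob):
    # Mean negative log-likelihood over the batch; eps guards against log(0).
    eps = 1e-7
    return -sum(y * math.log(max(p, eps)) + (1 - y) * math.log(max(1 - p, eps))
                for y, p in zip(y_true, y_prob)) / len(y_true)

def accuracy(y_true, y_prob, threshold=0.5):
    # Fraction of predictions that land on the correct side of the threshold.
    return sum((p >= threshold) == bool(y) for y, p in zip(y_true, y_prob)) / len(y_true)

y_true = [1, 0, 1, 1]          # "duplicate or not" labels
y_prob = [0.9, 0.2, 0.4, 0.8]  # model's predicted probabilities
print(accuracy(y_true, y_prob))  # 0.75 - one of four predictions is wrong
```

Run over the training set these give loss and acc; run over the validation set they give val_loss and val_acc.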

At the moment your model has an accuracy of ~86% on the training set and ~84% on the validation set. This means that you can expect your model to perform with ~84% accuracy on new data.

I notice that as your epochs go from 23 to 25, your acc metric increases while your val_acc metric decreases. This means that your model is fitting the training set better, but is losing its ability to predict on new data - a sign that it is starting to fit on noise and is beginning to overfit.
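A common response to this pattern is early stopping: watch val_loss and stop training once it hasn't improved for a few epochs. Keras ships this as the EarlyStopping callback; the sketch below shows the underlying idea in plain Python, with a hypothetical patience of 3 and val_loss values loosely based on the question's log:

```python
def should_stop(val_losses, patience=3):
    """Stop when val loss hasn't improved for `patience` consecutive epochs."""
    if len(val_losses) <= patience:
        return False
    best_before = min(val_losses[:-patience])
    # If none of the last `patience` epochs beat the earlier best, give up.
    return min(val_losses[-patience:]) >= best_before

history = [0.40, 0.36, 0.3473, 0.3497, 0.3518, 0.3550]
print(should_stop(history))  # True - no improvement in the last 3 epochs
```

In Keras the equivalent would be passing something like keras.callbacks.EarlyStopping(monitor='val_loss', patience=3) to fit.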

So that is a quick explanation on validation metrics and how to interpret them.

answered Oct 15 '22 23:10 by Primusa