A professor of mine gave me a small script that he uses to visualize the evolution of his neural net after every epoch of training. The plot shows three values: train loss, train error, and test error.
What is the difference between the first two?
Train Loss: The value of the objective function you are minimizing. Depending on the specific objective, this can be a positive or negative number.
Train Error: A human-interpretable metric of your model's performance, usually the fraction of training examples the model got incorrect. This is always a value between 0 and 1.
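To make the distinction concrete, here is a minimal sketch in plain NumPy, using binary cross-entropy as the loss and hypothetical predicted probabilities (the data and the 0.5 threshold are assumptions for illustration, not part of the original script):

```python
import numpy as np

y_true = np.array([1, 0, 1, 1, 0])            # ground-truth labels
y_prob = np.array([0.9, 0.2, 0.6, 0.4, 0.1])  # predicted P(class = 1)

# Train loss: binary cross-entropy, the differentiable objective
# that gradient descent actually minimizes.
eps = 1e-12  # guard against log(0)
loss = -np.mean(y_true * np.log(y_prob + eps)
                + (1 - y_true) * np.log(1 - y_prob + eps))

# Train error: fraction of examples whose thresholded prediction is wrong.
y_pred = (y_prob >= 0.5).astype(int)
error = np.mean(y_pred != y_true)

print(f"loss  = {loss:.4f}")   # ~0.3722
print(f"error = {error:.4f}")  # 0.2000 (one of five examples misclassified)
```

Note that the loss is an average of per-example penalties on a continuous scale, while the error only counts hard misclassifications.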
To understand the difference between error and loss you need to understand how your neural network learns. For the backpropagation algorithm to work, there must exist a continuous, differentiable loss function; the loss value is simply the value of that function. Sometimes this loss is exactly what you want to minimize (e.g., the distance between your model's predictions and the true function in a regression task), but sometimes your error measure is not continuous or cannot be differentiated, and then you have to introduce a separate surrogate loss function. A good example is binary classification, where accuracy is not differentiable: you usually minimize a cross-entropy or hinge loss in order to improve accuracy. In that case your error is 1 - accuracy, while your loss is the value of, e.g., the cross-entropy.
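This surrogate relationship also explains why the two curves can move independently. Below is a small sketch (again with made-up numbers, purely illustrative) where two sets of predictions have identical error but different cross-entropy loss; the smooth loss keeps rewarding progress that the error metric is blind to:

```python
import numpy as np

def cross_entropy(y, p, eps=1e-12):
    """Mean binary cross-entropy between labels y and probabilities p."""
    return -np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))

def error_rate(y, p):
    """Fraction of examples misclassified at a 0.5 threshold (1 - accuracy)."""
    return np.mean((p >= 0.5).astype(int) != y)

y = np.array([1, 1, 0, 0])
p_early = np.array([0.55, 0.6, 0.45, 0.4])  # barely on the right side
p_late  = np.array([0.95, 0.9, 0.05, 0.1])  # confidently correct

for name, p in [("early", p_early), ("late", p_late)]:
    print(f"{name}: error = {error_rate(y, p):.2f}, "
          f"loss = {cross_entropy(y, p):.4f}")
# early: error = 0.00, loss = 0.5543
# late:  error = 0.00, loss = 0.0783
```

Both epochs have zero error, yet the loss drops sharply as the model grows more confident, which is exactly why the training curves for loss and error need not track each other.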