 

Noisy training loss

I am training an encoder-decoder attention-based model with a batch size of 8. I don't suspect too much noise in the dataset; however, the examples come from a few different distributions.

I can see a lot of noise in the training loss curve. After smoothing (0.99), the trend is fine. The accuracy of the model is also not bad.

I'd like to understand what could be the reason for this shape of the loss curve.
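
By "smoothing (0.99)" I mean exponential averaging of the logged loss values, roughly like the sketch below (not my actual logging code; the synthetic loss values are only for illustration):

```python
import numpy as np

def ema_smooth(values, factor=0.99):
    """Exponential moving average of a logged metric (TensorBoard-style smoothing)."""
    smoothed = []
    running = values[0]
    for v in values:
        running = factor * running + (1.0 - factor) * v
        smoothed.append(running)
    return np.array(smoothed)

# Synthetic noisy loss: a decaying trend plus noise spikes.
steps = np.arange(5000)
raw_loss = 2.0 * np.exp(-steps / 2000.0) + np.abs(np.random.randn(5000)) * 0.5
smoothed_loss = ema_smooth(raw_loss)
print(raw_loss[-5:], smoothed_loss[-5:])  # smoothed curve shows the underlying downward trend
```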

[Images: noisy training loss; averaged training loss]

asked Feb 02 '18 by DavidS1992


3 Answers

I found the answer myself.

I think the other answers are not correct, because they are based on experience with simpler models/architectures. The main point that was bothering me was that noise in the loss is usually more symmetrical (if you plot the average, the noise lies randomly above and below it). Here, we see more of a low-lying trend with sudden peaks.

As I wrote, the architecture I'm using is an encoder-decoder with attention. It follows easily that inputs and outputs can have different lengths. The loss is summed over all time steps, and it DOESN'T need to be divided by the number of time steps.

https://www.tensorflow.org/tutorials/seq2seq

Important note: It's worth pointing out that we divide the loss by batch_size, so our hyperparameters are "invariant" to batch_size. Some people divide the loss by (batch_size * num_time_steps), which plays down the errors made on short sentences. More subtly, our hyperparameters (applied to the former way) can't be used for the latter way. For example, if both approaches use SGD with a learning rate of 1.0, the latter approach effectively uses a much smaller learning rate of 1 / num_time_steps.

I was not averaging the loss over the number of time steps; that's why the noise is observable.
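
Here is a rough NumPy sketch of that effect (made-up per-token losses, not my actual training code). When the summed loss is divided only by batch_size, batches of long sentences report a much larger loss than batches of short ones, even though the per-token loss is the same; dividing by the total number of tokens removes that source of spikes:

```python
import numpy as np

rng = np.random.default_rng(0)
batch_size = 8

def batch_loss(seq_lens, per_token_loss=0.5):
    """Simulate a batch where every target token contributes ~0.5 to the loss."""
    summed = sum(per_token_loss * rng.normal(1.0, 0.1, size=n).sum() for n in seq_lens)
    return summed / batch_size, summed / sum(seq_lens)

short_batch = rng.integers(5, 15, size=batch_size)    # short target sequences
long_batch = rng.integers(80, 120, size=batch_size)   # long target sequences

for name, lens in (("short", short_batch), ("long", long_batch)):
    per_batch, per_token = batch_loss(lens)
    print(f"{name}: loss/batch_size = {per_batch:.2f}, loss/total_tokens = {per_token:.2f}")
# loss/batch_size jumps by an order of magnitude between the two batches;
# loss/total_tokens stays around 0.5 for both.
```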

P.S. Similarly, a batch size of, say, 8 can contain a few hundred input and target tokens, so in fact you can't say whether it is small or big without knowing the mean example length.

answered Dec 18 '22 by DavidS1992


You are using mini-batch gradient descent, which computes the gradient of the loss with respect to only the examples in the mini-batch. However, the loss you are measuring is over all training examples. The overall loss should have a downward trend, but it will often move in the wrong direction because your mini-batch gradient is not an accurate enough estimate of the gradient of the total loss.
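
Here is a toy sketch of that point (made-up linear-regression data, not your model): the loss evaluated on each mini-batch of 8 bounces around from step to step, while the loss evaluated on the whole training set with the same parameters trends down much more smoothly.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 5))
y = X @ rng.normal(size=5) + 0.1 * rng.normal(size=1000)

w = np.zeros(5)
lr, batch_size = 0.05, 8
minibatch_loss, full_loss = [], []

for step in range(500):
    idx = rng.integers(0, len(X), size=batch_size)
    xb, yb = X[idx], y[idx]
    err = xb @ w - yb
    minibatch_loss.append((err ** 2).mean())        # noisy: depends on the 8 sampled examples
    full_loss.append(((X @ w - y) ** 2).mean())     # smoother: same parameters, all examples
    w -= lr * (2.0 / batch_size) * (xb.T @ err)     # gradient of the mean squared error on the batch

print("mini-batch loss, first/last:", round(minibatch_loss[0], 3), round(minibatch_loss[-1], 3))
print("full-data loss,  first/last:", round(full_loss[0], 3), round(full_loss[-1], 3))
```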

Furthermore, you are multiplying the gradient by the learning rate at each step to try to descend the loss function. This is a local approximation and can often overshoot the target minimum, ending up at a higher point on the loss surface, especially if your learning rate is high.

[Image: loss curve for a model with one parameter, showing a gradient step overshooting the minimum]

Think of this image as the loss function for a model with only one parameter. We take the gradient at a point and multiply it by the learning rate to project a line segment in the direction opposite the gradient (not pictured). We then take the x-value at the end of this line segment as our updated parameter, and finally we compute the loss at this new parameter setting.

If our learning rate is too high, we will overshoot the minimum that the step was aiming for and possibly end up at a higher loss, as pictured.
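
A minimal numeric version of the picture, with an assumed one-parameter quadratic loss (the numbers are only for illustration):

```python
# One-parameter quadratic loss L(w) = w**2, minimum at w = 0.
def loss(w):
    return w ** 2

def grad(w):
    return 2 * w

w = 3.0
for lr in (0.1, 1.2):                     # 1.2 is above the stable threshold of 1.0 for this loss
    w_new = w - lr * grad(w)
    print(f"lr={lr}: loss {loss(w):.2f} -> {loss(w_new):.2f}")
# lr=0.1: 9.00 -> 5.76 (loss decreases); lr=1.2: 9.00 -> 17.64 (overshoots and loss increases)
```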

answered Dec 18 '22 by Imran


A noisy training loss with good accuracy can be due to the following reason:

Local minima:

The loss function can have local minima, so every time gradient descent converges towards a local minimum, the loss/cost decreases. But with a good learning rate, the model can jump out of these points, and gradient descent will converge towards the global minimum, which is the solution. That's why the training loss is very noisy. See the sketch after the image below.

[Image: loss curve with local minima and a global minimum]
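
For illustration, here is gradient descent on a made-up one-dimensional non-convex loss (not your model's actual loss surface): with a small learning rate it settles in the local minimum, while a larger learning rate carries it past the bump into the global minimum.

```python
# Toy non-convex loss with a local minimum near w = 1.1 and the global minimum near w = -1.3.
def loss(w):
    return w ** 4 - 3 * w ** 2 + w

def grad(w):
    return 4 * w ** 3 - 6 * w + 1

for lr in (0.01, 0.1):
    w = 2.0
    for _ in range(200):
        w -= lr * grad(w)
    print(f"lr={lr}: ends at w = {w:.2f}, loss = {loss(w):.2f}")
# lr=0.01 gets stuck in the local minimum (w ~ 1.1);
# lr=0.1 takes larger steps and ends up in the global minimum (w ~ -1.3).
```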

answered Dec 18 '22 by janu777