 

Python Keras LSTM learning converges too fast on high loss

This is more of a deep learning conceptual problem, and if this is not the right platform I'll take it elsewhere.

I'm trying to use a Keras LSTM sequential model to learn sequences of text and map them to a numeric value (a regression problem).

The thing is, the learning always converges too fast at a high loss (both training and testing). I've tried all kinds of hyperparameters, and I have a feeling it's a local-minimum issue causing the model's high bias.

My questions are basically:

  1. How to initialize weights and bias given this problem?
  2. Which optimizer to use?
  3. How deep should I make the network? (I'm afraid that a very deep network will make the training time unbearable and increase the model's variance.)
  4. Should I add more training data?

Input and output are normalized with minmax.

I am using SGD with momentum, currently with 3 LSTM layers (126, 256, and 128 units) and 2 dense layers (200 units, then 1 output neuron).

I have printed the weights after a few epochs and noticed that many weights are zero and the rest are essentially 1 (or very close to it).

Here are some plots from TensorBoard (screenshots not reproduced here).

Asked by NRG on Sep 14 '17.

People also ask

How can Keras reduce the learning rate?

A typical way is to drop the learning rate by half every 10 epochs. To implement this in Keras, define a step-decay function and pass it to the LearningRateScheduler callback, which computes the updated learning rate at each epoch for use in the SGD optimizer.
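As a sketch, that step-decay schedule can be written as a plain function (the initial rate of 0.1 and the halving interval are illustrative choices, not values from the question):

```python
import math

def step_decay(epoch, initial_lr=0.1, drop=0.5, epochs_per_drop=10):
    """Halve the learning rate every `epochs_per_drop` epochs."""
    return initial_lr * math.pow(drop, math.floor(epoch / epochs_per_drop))

# In Keras you would hand this function to the callback, e.g.:
# from tensorflow.keras.callbacks import LearningRateScheduler
# model.fit(X, y, callbacks=[LearningRateScheduler(step_decay)])
```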

What is loss value in Lstm?

The Mean Squared Error, or MSE, loss is the default loss to use for regression problems. Mean squared error is calculated as the average of the squared differences between the predicted and actual values. The result is always positive regardless of the sign of the predicted and actual values, and a perfect value is 0.0.


2 Answers

Fast convergence to a very high loss could mean you are facing an exploding-gradient problem. Try a much lower learning rate, such as 1e-5 or 1e-6. You can also use a technique like gradient clipping to cap gradient magnitudes when the learning rate is higher.
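The idea behind gradient clipping can be sketched framework-free; in Keras it corresponds to the `clipnorm`/`clipvalue` arguments of an optimizer such as `SGD`. This toy version rescales gradients by their global L2 norm:

```python
import numpy as np

def clip_by_global_norm(grads, max_norm):
    """Rescale a list of gradient arrays so their combined L2 norm
    does not exceed max_norm (no-op if already within the limit)."""
    global_norm = np.sqrt(sum(np.sum(g ** 2) for g in grads))
    if global_norm <= max_norm:
        return grads
    scale = max_norm / global_norm
    return [g * scale for g in grads]

# An "exploded" gradient of norm 50 gets rescaled to norm 1,
# keeping its direction but limiting the update step size.
grads = [np.array([30.0, 40.0])]
clipped = clip_by_global_norm(grads, 1.0)
```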

Answer 1

Another reason could be the initialization of the weights; try the three methods below:

  1. He initialization, the method described in this paper: https://arxiv.org/abs/1502.01852
  2. Xavier initialization
  3. Random initialization

In many cases the first initialization method works best.
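As a rough sketch of what the first two initializers do (in Keras they are available as the `he_normal` and `glorot_uniform` kernel initializers), each draws weights whose variance is scaled by the layer's fan-in/fan-out:

```python
import numpy as np

rng = np.random.default_rng(0)

def he_normal(fan_in, fan_out):
    """He init: normal with std = sqrt(2 / fan_in), suited to ReLU layers."""
    return rng.normal(0.0, np.sqrt(2.0 / fan_in), size=(fan_in, fan_out))

def xavier_uniform(fan_in, fan_out):
    """Xavier/Glorot init: uniform in +/- sqrt(6 / (fan_in + fan_out))."""
    limit = np.sqrt(6.0 / (fan_in + fan_out))
    return rng.uniform(-limit, limit, size=(fan_in, fan_out))

# A weight matrix for a 256 -> 128 layer, like the question's LSTM sizes
W = he_normal(256, 128)
```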

Answer 2

You can try different optimizers like

  1. Momentum optimizer
  2. SGD or Gradient descent
  3. Adam optimizer

The choice of optimizer should be informed by your loss function. For example, in a logistic-regression setting, using MSE as the loss makes the objective non-convex, so gradient-based optimizers can stall in poor local minima; cross-entropy is the usual choice there.
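As a minimal illustration of what momentum adds to plain gradient descent, here is the update rule applied to a 1-D quadratic (a toy stand-in; in Keras these optimizers are `SGD(momentum=...)` and `Adam`):

```python
def sgd_momentum(grad_fn, w0, lr=0.1, momentum=0.9, steps=200):
    """Vanilla SGD with momentum: v <- mu*v - lr*grad; w <- w + v."""
    w, v = w0, 0.0
    for _ in range(steps):
        v = momentum * v - lr * grad_fn(w)
        w = w + v
    return w

# Minimize f(w) = (w - 3)^2, whose gradient is 2*(w - 3);
# the velocity term lets the iterate coast toward the minimum at w = 3.
w_star = sgd_momentum(lambda w: 2.0 * (w - 3.0), w0=0.0)
```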

Answer 3

How deep or wide your network should be again depends on which type of network you are using and on the problem itself.

As you said, you are using a sequential LSTM model to learn sequences of text. That choice of model fits the problem well; you could also try stacking 4-5 LSTM layers.

Answer 4

If your gradients are going to zero, that is the vanishing-gradient problem; if they blow up toward infinity, that is the exploding-gradient problem. Either one can make training converge early at a poor loss. Try gradient clipping with a suitable learning rate, together with the first weight-initialization technique above.

I am sure this will solve your problem.

Answered by Avinash Rai on Oct 12 '22.


Consider reducing your batch_size. With a large batch_size, at some point the gradient may no longer reflect the stochastic variation in your data, and for that reason training converges earlier.
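That stochasticity argument can be illustrated directly: for a simple squared-error objective over synthetic data (a rough sketch, not Keras-specific), the spread of mini-batch gradient estimates shrinks as the batch grows, so large batches behave almost deterministically:

```python
import numpy as np

rng = np.random.default_rng(42)
data = rng.normal(loc=5.0, scale=2.0, size=100_000)

def minibatch_grad(w, batch_size):
    """Gradient of f(w) = mean((w - x)^2) estimated on a random mini-batch."""
    batch = rng.choice(data, size=batch_size, replace=False)
    return 2.0 * (w - batch).mean()

# Spread of the gradient estimate at w = 0 for small vs large batches:
# the small-batch estimates are much noisier (std scales like 1/sqrt(B)).
small = np.std([minibatch_grad(0.0, 8) for _ in range(500)])
large = np.std([minibatch_grad(0.0, 512) for _ in range(500)])
```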

Answered by Aziz on Oct 12 '22.