Neural network toy model to fit sine function fails, what's wrong?

I'm a graduate student, new to Keras and neural networks, and I was trying to fit a very simple feedforward neural network to a one-dimensional sine function.

Below are three examples of the best fit I could get. In the plots, you can see the output of the network vs. the ground truth.

[Plots: neural network output vs. ground truth, runs 1-3]

The complete code, just a few lines long, is posted here: example Keras
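Since the linked code isn't reproduced here, a minimal sketch of the kind of setup described in the question might look like the following (the layer sizes, activations, and input range are guesses, not the asker's actual code):

```python
import numpy as np
from tensorflow import keras

# Unscaled inputs over several periods of the sine -- a common
# first attempt that tends to fit poorly.
x = np.random.uniform(-10, 10, size=(1000, 1))
y = np.sin(x)

model = keras.Sequential([
    keras.Input(shape=(1,)),
    keras.layers.Dense(10, activation="tanh"),
    keras.layers.Dense(10, activation="tanh"),
    keras.layers.Dense(1, activation="tanh"),
])
model.compile(optimizer="adam", loss="mse")
model.fit(x, y, epochs=5, batch_size=32, verbose=0)

pred = model.predict(x, verbose=0)
```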


I played with the number of layers, different activation functions, different initializations, different loss functions, the batch size, and the number of training samples. None of these improved the results beyond the examples above.

I would appreciate any comments and suggestions. Is a sine a hard function for a neural network to fit? I suspect the answer is no, so I must be doing something wrong...


There is a similar question here from 5 years ago, but the OP there didn't provide the code, and it is still not clear what went wrong or how they resolved the problem.

asked Dec 07 '25 by them

1 Answer

In order to make your code work, you need to:

  • scale the input values to the [-1, +1] range (neural networks don't cope well with large input values)
  • scale the output values as well, since the tanh activation doesn't work well close to ±1
  • use the relu activation instead of tanh in all but the last layer (it converges much faster)

With these modifications, I was able to run your code with two hidden layers of 10 and 25 neurons.
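The fixes above can be sketched like this (a minimal example, not the answerer's exact code: the hidden layer sizes of 10 and 25 come from the answer, while the input range, scaling factor, optimizer, and epoch count are assumptions):

```python
import numpy as np
from tensorflow import keras

# One period of the sine.
x = np.random.uniform(-np.pi, np.pi, size=(10000, 1))
y = np.sin(x)

# Fix 1: scale inputs into [-1, +1].
x_scaled = x / np.pi
# Fix 2: scale outputs away from tanh's saturation region near +/-1.
y_scaled = y * 0.9

# Fix 3: relu in the hidden layers, tanh only in the last layer.
model = keras.Sequential([
    keras.Input(shape=(1,)),
    keras.layers.Dense(10, activation="relu"),
    keras.layers.Dense(25, activation="relu"),
    keras.layers.Dense(1, activation="tanh"),
])
model.compile(optimizer="adam", loss="mse")
model.fit(x_scaled, y_scaled, epochs=20, batch_size=32, verbose=0)

# Undo the output scaling at prediction time.
pred = model.predict(x_scaled, verbose=0) / 0.9
```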

answered Dec 09 '25 by BlackBear