
Approximating sine function with Neural Network and ReLU

I am trying to approximate a sine function with a neural network (Keras).

Yes, I read the related posts :)

  • Link 1

  • Link 2

  • Link 3

Using four hidden neurons with sigmoid and an output layer with linear activation works fine.

But there are also settings that provide results that seem strange to me.

Since I have just started working with neural networks, I am interested in what happens and why, but I could not figure this out so far.

# -*- coding: utf-8 -*-

import numpy as np
np.random.seed(7)

from keras.models import Sequential
from keras.layers import Dense
import pylab as pl
from sklearn.preprocessing import MinMaxScaler

# 10000 samples covering one full period of the sine
X = np.linspace(0.0, 2.0 * np.pi, 10000).reshape(-1, 1)
Y = np.sin(X)

x_scaler = MinMaxScaler()
#y_scaler = MinMaxScaler(feature_range=(-1.0, 1.0))
y_scaler = MinMaxScaler()  # default feature_range is (0, 1)

X = x_scaler.fit_transform(X)
Y = y_scaler.fit_transform(Y)

model = Sequential()
# one hidden layer of 4 units; the commented-out lines swap the activation for comparison
model.add(Dense(4, input_dim=X.shape[1], kernel_initializer='uniform', activation='relu'))
# model.add(Dense(4, input_dim=X.shape[1], kernel_initializer='uniform', activation='sigmoid'))
# model.add(Dense(4, input_dim=X.shape[1], kernel_initializer='uniform', activation='tanh'))
model.add(Dense(1, kernel_initializer='uniform', activation='linear'))

model.compile(loss='mse', optimizer='adam', metrics=['mae'])

model.fit(X, Y, epochs=500, batch_size=32, verbose=2)

res = model.predict(X, batch_size=32)

res_rscl = y_scaler.inverse_transform(res)

Y_rscl = y_scaler.inverse_transform(Y)

pl.subplot(211)
pl.plot(res_rscl, label='ann')
pl.plot(Y_rscl, label='train')
pl.xlabel('#')
pl.ylabel('value [arb.]')
pl.legend()
pl.subplot(212)
pl.plot(Y_rscl - res_rscl, label='diff')
pl.legend()
pl.show()

This is the result for four hidden neurons (ReLU) and linear output activation:

[plot: 4 hidden neurons (ReLU), output activation: linear]

Why does the result take the shape of the ReLU?

Does this have something to do with the output normalization?
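
For reference, here is a quick check of what the two scaler settings in the script do to the target (the default MinMaxScaler feature_range is (0, 1); the commented-out line maps to (-1, 1)):

# Quick check of the two target scalings used in the script above.
import numpy as np
from sklearn.preprocessing import MinMaxScaler

Y = np.sin(np.linspace(0.0, 2.0 * np.pi, 10000)).reshape(-1, 1)

y01 = MinMaxScaler().fit_transform(Y)                           # default range (0, 1)
y11 = MinMaxScaler(feature_range=(-1.0, 1.0)).fit_transform(Y)  # symmetric range (-1, 1)

print(y01.min(), y01.max())   # 0.0 1.0
print(y11.min(), y11.max())   # -1.0 1.0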

Joe asked Jun 23 '17 at 08:06

People also ask

How ReLU can be used with neural networks?

One way ReLUs improve neural networks is by speeding up training. The gradient computation is very simple (either 0 or 1 depending on the sign of x). Also, the computational step of a ReLU is easy: any negative elements are set to 0.0 -- no exponentials, no multiplication or division operations.
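
A minimal NumPy illustration of that point (not part of the original answer):

# ReLU and its gradient in plain NumPy: the forward pass clips negatives to 0,
# and the gradient is simply 1 where the input was positive, 0 elsewhere.
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def relu_grad(x):
    return (x > 0).astype(x.dtype)

x = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
print(relu(x))       # [0.  0.  0.  0.5 2. ]
print(relu_grad(x))  # [0. 0. 0. 1. 1.]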

Can ReLU approximate any function?

We have proved that a sufficiently large neural network using the ReLU activation function can approximate any function in L^1 up to any arbitrary precision.
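
One concrete way to see this: fixed combinations of ReLUs already form piecewise-linear "hat" bumps, and sums of shifted, scaled hats can trace out a smooth curve such as the sine. A hand-built sketch with fixed weights (illustrative only, not a trained network):

# A triangular "hat" bump built from three ReLUs:
# relu(x) - 2*relu(x - 1) + relu(x - 2) rises from 0 to 1 on [0, 1]
# and falls back to 0 on [1, 2]. Summing shifted, scaled hats gives a
# piecewise-linear approximation of any continuous curve.
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def hat(x, center, width):
    a, b = center - width, center + width
    return (relu(x - a) - 2.0 * relu(x - center) + relu(x - b)) / width

print(hat(np.array([0.0, 0.5, 1.0, 1.5, 2.0]), center=1.0, width=1.0))
# [0.  0.5 1.  0.5 0. ]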

How do you approximate the sine function?

With p(θ) = θ(180 − θ), one fits a rational form f(θ) = a·p(θ) / (1 + b·p(θ)) for 0 ≤ θ ≤ 180 (note that p(90) = 8100). This gives Bhaskara's approximation formula for the sine function: sin(θ°) ≈ 4θ(180 − θ) / (40500 − θ(180 − θ)), for 0 ≤ θ ≤ 180.
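
Illustrative check (not from the original answer): comparing this formula against NumPy's sine over [0, 180] degrees.

# Bhaskara's approximation vs. np.sin on [0, 180] degrees.
import numpy as np

def bhaskara_sin(theta_deg):
    p = theta_deg * (180.0 - theta_deg)
    return 4.0 * p / (40500.0 - p)

theta = np.linspace(0.0, 180.0, 1000)
err = np.abs(bhaskara_sin(theta) - np.sin(np.deg2rad(theta)))
print(err.max())  # worst-case absolute error, roughly 0.0016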

How do you approximate a function in a neural network?

The key to neural networks' ability to approximate any function is that they incorporate non-linearity into their architecture. Each layer is associated with an activation function that applies a non-linear transformation to the output of that layer.
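
A small sketch of why the non-linearity is essential: stacking purely linear layers collapses to a single linear map, so depth alone would never bend a straight line into a sine (illustrative NumPy only):

# Two stacked linear layers (no activation) are equivalent to one linear layer:
# W2 @ (W1 @ x + b1) + b2 == (W2 @ W1) @ x + (W2 @ b1 + b2)
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(1, 5))                               # 5 inputs of dimension 1
W1, b1 = rng.normal(size=(4, 1)), rng.normal(size=(4, 1))
W2, b2 = rng.normal(size=(1, 4)), rng.normal(size=(1, 1))

two_layers = W2 @ (W1 @ x + b1) + b2
one_layer = (W2 @ W1) @ x + (W2 @ b1 + b2)
print(np.allclose(two_layers, one_layer))  # True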


2 Answers

Two things here:

  1. Your network is really shallow and small. With only 4 ReLU neurons it is quite likely that a couple of them end up completely saturated (stuck at outputting 0), and this is probably why your network's result looks like that. Try he_normal or he_uniform as the kernel initializer to overcome that.
  2. In my opinion your network is too small for this task. I would definitely increase both the depth and the width of your network by introducing more neurons and layers. In the case of sigmoid, which has a shape similar to a sine, this might work fine -- but in the case of relu you really need a bigger network (see the sketch after this list).
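
A minimal sketch combining both suggestions, assuming the data pipeline from the question; the layer widths are illustrative choices, not tuned values:

# Sketch only: He-style initialization plus a wider, deeper ReLU network.
# Assumes X and Y are the scaled arrays from the question's script.
from keras.models import Sequential
from keras.layers import Dense

model = Sequential()
model.add(Dense(32, input_dim=X.shape[1], kernel_initializer='he_uniform', activation='relu'))
model.add(Dense(32, kernel_initializer='he_uniform', activation='relu'))
model.add(Dense(1, kernel_initializer='uniform', activation='linear'))

model.compile(loss='mse', optimizer='adam', metrics=['mae'])
model.fit(X, Y, epochs=500, batch_size=32, verbose=2)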
Marcin Możejko answered Sep 28 '22


Try adding more hidden layers, each with more hidden units. I used this code:

model = Sequential()
model.add(Dense(50, input_dim=X.shape[1], activation='relu'))
model.add(Dense(50, input_dim=X.shape[1], activation='relu'))
model.add(Dense(1, activation='linear'))

and got these results:

[result plot not reproduced here]

stackoverflowuser2010 answered Sep 28 '22