 

Neural Network with tanh wrong saturation with normalized data

I'm using a neural network with 4 input neurons, one hidden layer of 20 neurons, and a 7-neuron output layer.

I'm trying to train it on a BCD-to-7-segment task. My data is normalized: 0 is mapped to -1 and 1 stays 1.

When the output error is evaluated, a neuron can saturate at the wrong extreme. If the desired output is 1 and the actual output is -1, the error is 1 - (-1) = 2.

When I multiply it by the derivative of the activation function, error * (1 - output) * (1 + output), the delta becomes almost 0, because 2 * (1 - (-1)) * (1 + (-1)) = 2 * 2 * 0 = 0.
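A minimal numeric sketch of that computation (Python; the values are illustrative, with the output just short of full saturation):

    desired = 1.0
    output = -0.999                            # almost saturated at the wrong extreme
    error = desired - output                   # ~ 2
    derivative = (1 - output) * (1 + output)   # tanh'(x) expressed via the output
    delta = error * derivative                 # ~ 0.004 -> almost no weight update
    print(error, derivative, delta)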

How can I avoid this saturation error?

asked Nov 29 '12 by PVJ

2 Answers

Saturation at the asymptotes of the activation function is a common problem with neural networks. If you look at a graph of the function, it is not surprising: the curve is almost flat there, meaning the first derivative is (almost) 0, so the network can barely learn any more.

A simple solution is to scale the activation function to avoid this problem. For example, with the tanh() activation function (my favorite), it is recommended to use the following scaled version when the desired outputs are in {-1, 1}:

f(x) = 1.7159 * tanh( 2/3 * x)  

Consequently, the derivative is

f'(x) = 1.14393 * (1 - tanh(2/3 * x)^2)

This keeps the gradients in the most non-linear value range and speeds up learning. For all the details I recommend reading Yann LeCun's great paper Efficient BackProp. For this scaled tanh() activation function, the output error term would be calculated as

error = 2/3 * (1.7159 - output^2 / 1.7159) * (teacher - output)
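As a minimal sketch (Python; the constant and function names are my own), the scaled activation, its derivative, and the resulting output error term could be written as:

    import numpy as np

    A = 1.7159      # amplitude recommended by LeCun
    B = 2.0 / 3.0   # slope

    def f(x):
        # scaled tanh: f(x) = 1.7159 * tanh(2/3 * x)
        return A * np.tanh(B * x)

    def f_prime(x):
        # derivative: 1.14393 * (1 - tanh(2/3 * x)^2)
        return A * B * (1.0 - np.tanh(B * x) ** 2)

    def output_error(teacher, output):
        # same derivative, expressed in terms of the unit's output
        return B * (A - output ** 2 / A) * (teacher - output)

    # With the scaled tanh, an output of -1 is no longer at the asymptote,
    # so the error term for a target of +1 stays clearly non-zero:
    print(output_error(1.0, -1.0))   # ~ 1.51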
answered Sep 19 '22 by Domderon


This is bound to happen with any squashing activation function: the derivative approaches zero as the output approaches either extreme. It's been a while since I have worked with artificial neural networks, but if I remember correctly this (among many other things) is one of the limitations of the plain back-propagation algorithm.

You could add a momentum term so that there is still some correction based on previous updates, even when the current derivative is zero.
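For example, a plain gradient-descent-with-momentum step might look like this (Python; lr and mu are hypothetical hyper-parameters):

    def momentum_step(weights, gradient, velocity, lr=0.1, mu=0.9):
        # the running "velocity" keeps pushing the weights in the direction of
        # recent updates, even when the current gradient is (nearly) zero
        velocity = mu * velocity - lr * gradient
        return weights + velocity, velocity

    # scalar usage for illustration:
    w, v = 0.5, 0.0
    w, v = momentum_step(w, gradient=2.0, velocity=v)   # normal step builds velocity
    w, v = momentum_step(w, gradient=0.0, velocity=v)   # zero gradient, but w still moves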

You could also train by epoch, accumulating the weight deltas over the whole training set before applying the actual update (as opposed to updating after every sample). This also mitigates cases where the deltas oscillate between two values.
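A sketch of that per-epoch (batch) update, assuming a hypothetical grad_fn(weights, x, target) that returns the gradient for one sample:

    import numpy as np

    def train_epoch(weights, samples, grad_fn, lr=0.1):
        # accumulate the weight deltas over the whole training set,
        # then apply them once per epoch instead of after every sample
        total = np.zeros_like(weights)
        for x, target in samples:
            total += grad_fn(weights, x, target)   # grad_fn is hypothetical
        return weights - lr * total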

There are also more advanced methods, such as second-order back-propagation methods, that mitigate this particular problem.

However, keep in mind that tanh only reaches -1 and +1 at infinity, so the problem is largely theoretical.

answered Sep 18 '22 by Pavan Yalamanchili