I'm trying to build my own implementation of the neural network backpropagation algorithm. The code I have written for training is this so far:
import numpy as np

def train(x, labels, n):
    lam = 0.5                                    # learning rate
    w1 = np.random.uniform(0, 0.01, (20, 120))   # input-to-hidden weights
    w2 = np.random.uniform(0, 0.01, 20)          # hidden-to-output weights
    for i in xrange(n):
        w1 = w1/np.linalg.norm(w1)
        w2 = w2/np.linalg.norm(w2)
        for j in xrange(x.shape[0]):
            y1 = np.zeros(600)                   # outputs
            d1 = np.zeros(20)                    # deltas
            p = np.mat(x[j,:])
            a = np.dot(w1, p.T)                  # activation
            z = 1/(1 + np.exp((-1)*a))           # sigmoid
            y1[j] = np.dot(w2, z)
            for k in xrange(20):
                d1[k] = z[k]*(1 - z[k])*(y1[j] - labels[j])*np.sum(w2)  # delta update rule
                w1[k,:] = w1[k,:] - lam*d1[k]*x[j,:]                    # weight update
                w2[k] = w2[k] - lam*(y1[j] - labels[j])*z[k]
            E = 1/2*pow((y1[j] - labels[j]), 2)  # mean squared error
            print E
    return 0
Number of input units: 120, number of hidden units: 20, number of output units: 1, number of training samples: 600.
x is a 600x120 training set with zero mean and unit variance (max value 3.28, min value -4.07). The first 200 samples belong to class 1, the second 200 to class 2, and the last 200 to class 3. labels holds the class label assigned to each sample, and n is the number of iterations required for convergence.
I have initialized the weights between 0 and 0.01, and the input data is scaled to have unit variance and zero mean, yet the code still throws an overflow warning, resulting in the activation values 'a' becoming NaN. I can't understand what the problem is.
Every sample has 120 elements. A sample row of x (truncated):
[ 0.80145231 1.29567936 0.91474224 1.37541992 1.16183938 1.43947296
1.32440357 1.43449479 1.32742415 1.40533852 1.28817561 1.37977183
1.2290933 1.34720161 1.15877069 1.29699635 1.05428735 1.21923531
0.92312685 1.1061345 0.66647463 1.00044203 0.34270708 1.05589558
0.28770958 1.21639524 0.31522575 1.32862243 0.42135899 1.3997094
0.5780146 1.44444501 0.75872771 1.47334256 0.95372771 1.48878048
1.13968139 1.49119962 1.33121905 1.47326017 1.47548571 1.4450047
1.58272343 1.39327328 1.62929132 1.31126604 1.62705274 1.21790335
1.59951034 1.12756958 1.56253815 1.04096709 1.52651382 0.95942134
1.48875633 0.87746762 1.45248623 0.78782313 1.40446404 0.68370011
The logistic sigmoid function is prone to overflow in NumPy as the signal strength increases. Try adding the following line:
np.clip( signal, -500, 500 )
This will limit the values in the NumPy array to the given interval, which in turn prevents the overflow inside the sigmoid function.
>>> arr
array([[-900, -600, -300],
[ 0, 300, 600]])
>>> np.clip( arr, -500, 500)
array([[-500, -500, -300],
[ 0, 300, 500]])
This is the snippet I'm using in my projects:
def sigmoid_function( signal ):
    # Prevent overflow.
    signal = np.clip( signal, -500, 500 )
    # Calculate activation signal
    signal = 1.0/( 1 + np.exp( -signal ))
    return signal
#end
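For example (the test values here are just an illustration), inputs far outside the clipping range no longer raise the warning:

extreme = np.array([-1000.0, 0.0, 1000.0])
print( sigmoid_function( extreme ) )   # roughly [ 0. , 0.5, 1. ], no overflow warning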
As training progresses, the network improves its precision. As this precision approaches perfection, the sigmoid signal will approach either 1 from below or 0 from above, e.g. either 0.99999999999... or 0.00000000000000001...
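For context, a float64 value tops out near 1.8e308, so np.exp overflows for arguments above roughly 709, which is exactly what happens once -signal becomes a large positive number:

print( np.exp( 709.0 ) )   # ~8.22e307, still representable
print( np.exp( 710.0 ) )   # inf, emits "RuntimeWarning: overflow encountered in exp"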
Since NumPy is focused on performing highly accurate numerical operations, it will try to maintain the highest possible precision and will thus raise an overflow warning. Note: this warning can be ignored by setting:
np.seterr( over='ignore' )
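np.seterr changes the floating point error handling globally. If you only want to ignore the overflow around a specific computation, NumPy also provides the np.errstate context manager, which restores the previous settings on exit; a minimal sketch (the sample input is just an illustration):

with np.errstate( over='ignore' ):
    z = 1.0/( 1 + np.exp( -np.array([-1000.0, 0.0, 1000.0]) ))
print( z )   # [ 0.   0.5  1. ]; the overflowing exp saturates to inf, so that element's sigmoid becomes 0.0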