
Gradient checking in backpropagation

I'm trying to implement gradient checking for a simple feedforward neural network with a 2-unit input layer, a 2-unit hidden layer, and a 1-unit output layer. What I do is the following:

  1. Take each weight w of the network (between all the layers) and perform forward propagation using w + EPSILON and then w - EPSILON.
  2. Compute the numerical gradient using the results of the two feedforward propagations.

What I don't understand is how exactly to perform the backpropagation. Normally, I compare the output of the network to the target data (in the case of classification) and then backpropagate the error derivative through the network. However, I think in this case some other value has to be backpropagated, since the results of the numerical gradient computation are not dependent on the target data (but only on the input), while the error backpropagation depends on the target data. So, what is the value that should be used in the backpropagation part of gradient checking?

asked Oct 04 '14 by bdfgegtertasdg


People also ask

What is gradient checking in neural networks?

Gradient checking is a method for numerically checking the derivatives computed by your code to make sure that your implementation is correct. Carrying out this derivative-checking procedure significantly increases your confidence in the correctness of your code.

What is gradient check in PyBrain?

PyBrain already includes a function that does just that, gradientCheck(). You can pass it any network containing a structural component that you have programmed, and it will check whether the numerical gradients are (roughly) equal to the gradients specified by the _backwardImplementation() methods.

What is gradient in backpropagation algorithm?

The backpropagation algorithm works by computing the gradient of the loss function with respect to each weight via the chain rule, computing the gradient one layer at a time and iterating backward from the last layer to avoid redundant calculations of intermediate terms in the chain rule; this is an example of dynamic programming.

How is gradient descent used in backpropagation?

The Stochastic Gradient Descent algorithm requires gradients to be calculated for each variable in the model so that new values for the variables can be calculated. Back-propagation is an automatic differentiation algorithm that can be used to calculate the gradients for the parameters in neural networks.


1 Answer

Backpropagation is performed by first deriving the gradients analytically and then using those formulas while training. A neural network is essentially a multivariate function whose coefficients, or parameters, need to be found, i.e. trained.

The gradient with respect to a specific variable is the rate of change of the function value with respect to that variable. Therefore, as you mentioned, from the definition of the first derivative we can numerically approximate the gradient of any function, including a neural network.
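For a single weight w, writing J for the cost of the network with every other weight held fixed, the standard centered-difference approximation is (a common choice for eps is on the order of 1e-4, though a specific value is not prescribed here):

    dJ/dw ≈ (J(w + eps) - J(w - eps)) / (2 * eps)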

To check whether the analytical gradients of your neural network are correct, it is good practice to compare them against gradients computed with this numerical method.

For each weight layer w_l from all layers W = [w_0, w_1, ..., w_l, ..., w_k]
    For i in 0 to number of rows in w_l
        For j in 0 to number of columns in w_l
            w_l_minus = w_l; # Copy all the weights
            w_l_minus[i,j] = w_l_minus[i,j] - eps; # Change only this parameter

            w_l_plus = w_l; # Copy all the weights
            w_l_plus[i,j] = w_l_plus[i,j] + eps; # Change only this parameter

            cost_minus = cost of neural net by replacing w_l by w_l_minus
            cost_plus = cost of neural net by replacing w_l by w_l_plus

            w_l_grad[i,j] = (cost_plus - cost_minus)/(2*eps)

This process changes only one parameter at a time and computes the corresponding numerical gradient. In this case I have used the centered difference (f(x+h) - f(x-h)) / (2h), which seems to work better for me.
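As a rough, self-contained illustration only (not code from the original answer), here is a minimal NumPy sketch of the same loop. The cost callable, which is assumed to run a forward pass and evaluate the loss against the targets, and the list of weight matrices W are hypothetical placeholders:

    import numpy as np

    def numerical_gradients(cost, W, eps=1e-4):
        """Approximate the gradient of cost(W) for every weight matrix in W.

        cost : callable taking the full list of weight matrices and returning
               the scalar cost (hypothetical placeholder for your network's cost).
        W    : list of 2-D weight matrices, e.g. [w_0, w_1, ..., w_k].
        """
        grads = []
        for l, w_l in enumerate(W):
            w_l_grad = np.zeros_like(w_l)
            for i in range(w_l.shape[0]):
                for j in range(w_l.shape[1]):
                    W_minus = [w.copy() for w in W]   # copy all the weights
                    W_minus[l][i, j] -= eps           # change only this parameter

                    W_plus = [w.copy() for w in W]    # copy all the weights
                    W_plus[l][i, j] += eps            # change only this parameter

                    cost_minus = cost(W_minus)        # cost with w_l[i,j] - eps
                    cost_plus = cost(W_plus)          # cost with w_l[i,j] + eps

                    w_l_grad[i, j] = (cost_plus - cost_minus) / (2 * eps)
            grads.append(w_l_grad)
        return grads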

Note that you mentioned "the results of the numerical gradient computation are not dependent on the target data"; this is not true. When you compute cost_minus and cost_plus above, the cost is computed on the basis of both of the following (as the sketch after this list illustrates):

  1. The weights
  2. The target classes
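To make that dependence explicit, here is a minimal sketch of what such a cost function could look like for this 2-2-1 network; the mean squared error loss, the sigmoid activation, and the absence of bias terms are illustrative assumptions, not something stated in the question:

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def cost(W, X, T):
        """Scalar cost of the 2-2-1 network: it depends on the weights W
        AND on the targets T, not only on the input X."""
        w_hidden, w_output = W               # shapes (2, 2) and (2, 1)
        hidden = sigmoid(X @ w_hidden)       # forward pass, hidden layer
        output = sigmoid(hidden @ w_output)  # forward pass, output layer
        return 0.5 * np.mean((output - T) ** 2)  # loss measured against targets

With the sketch from the previous block, you would wrap this as lambda W: cost(W, X, T) before passing it to numerical_gradients.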

Therefore, the process of backpropagation should be independent of the gradient checking. Compute the numerical gradients before the backpropagation update, then compute the gradients using backpropagation in one epoch (using something similar to the above). Finally, compare each component of the gradient vectors/matrices and check whether they are close enough.
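One common way to do this comparison is a normalized relative error; the sketch below assumes both gradients are available as NumPy arrays of the same shape, and the tolerance of 1e-7 is only an illustrative choice, not something from the original answer:

    import numpy as np

    def gradients_close(numerical_grad, backprop_grad, tol=1e-7):
        """Compare two gradient matrices using a normalized relative error."""
        diff = np.linalg.norm(numerical_grad - backprop_grad)
        scale = np.linalg.norm(numerical_grad) + np.linalg.norm(backprop_grad)
        relative_error = diff / max(scale, 1e-12)  # guard against dividing by zero
        return relative_error < tol

You would call this once per weight matrix, e.g. gradients_close(w_l_grad, w_l_backprop_grad) for each layer.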

answered Oct 12 '22 by phoxis