Update: a better formulation of the issue.
I'm trying to understand the backpropagation algorithm with an XOR neural network as an example. For this case there are 2 input neurons + 1 bias, 2 neurons in the hidden layer + 1 bias, and 1 output neuron.
A     B     A XOR B
1     1     -1
1     -1     1
-1     1     1
-1    -1    -1
(source: wikimedia.org)
I'm using stochastic backpropagation.
After reading a bit more I have found out that the error of the output unit is propagated back to the hidden layers... Initially this was confusing: by the time you work back to the input layer of the network, each neuron receives an error adjustment from both of the neurons in the hidden layer. In particular, the way the error is distributed is difficult to grasp at first.
Step 1: calculate the output for each training instance.
Step 2: calculate the error between the output neuron(s) (in our case there is only one) and the target value(s):

delta_k = o_k * (1 - o_k) * (t_k - o_k)

Step 3: use the error from Step 2 to calculate the error for each hidden unit h:

delta_h = o_h * (1 - o_h) * sum over k of (w_kh * delta_k)
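The three steps above can be sketched in code. This is a minimal sketch, not taken from any particular tutorial: the learning rate, epoch count, and random initialization are illustrative choices, and tanh units are used so the -1/+1 targets are reachable.

```python
import math
import random

random.seed(0)
ETA = 0.5  # learning rate (illustrative choice)

# w_hidden[h] = weights from (input A, input B, bias) into hidden unit h
w_hidden = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(2)]
# w_out = weights from (hidden 0, hidden 1, bias) into the output unit
w_out = [random.uniform(-1, 1) for _ in range(3)]

def forward(a, b):
    """Step 1: compute the output for one input instance."""
    o_h = [math.tanh(w[0] * a + w[1] * b + w[2]) for w in w_hidden]
    o_k = math.tanh(w_out[0] * o_h[0] + w_out[1] * o_h[1] + w_out[2])
    return o_h, o_k

def train_step(a, b, t):
    o_h, o_k = forward(a, b)
    # Step 2: output error; for tanh, f'(z) = 1 - tanh(z)^2 = 1 - o_k^2
    delta_k = (1 - o_k ** 2) * (t - o_k)
    # Step 3: each hidden unit h receives w_kh * delta_k from the output
    delta_h = [(1 - o_h[h] ** 2) * w_out[h] * delta_k for h in range(2)]
    # stochastic weight updates
    for h in range(2):
        w_out[h] += ETA * delta_k * o_h[h]
    w_out[2] += ETA * delta_k  # bias input is 1
    for h in range(2):
        for i, x in enumerate((a, b, 1.0)):
            w_hidden[h][i] += ETA * delta_h[h] * x

data = [(1, 1, -1), (1, -1, 1), (-1, 1, 1), (-1, -1, -1)]
err_before = sum((forward(a, b)[1] - t) ** 2 for a, b, t in data)
for _ in range(5000):
    for a, b, t in data:
        train_step(a, b, t)
err_after = sum((forward(a, b)[1] - t) ** 2 for a, b, t in data)
print(err_before, err_after)
```

Running this, the total squared error over the four XOR instances should drop substantially from its random-initialization value as the stochastic updates proceed.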
The 'weight kh' is the weight between hidden unit h and output unit k. This is confusing because an input unit does not have a direct weight to the output unit. After staring at the formula for a few hours I started to think about what the summation means, and I'm starting to come to the conclusion that each input neuron's weight connecting to the hidden-layer neurons is multiplied by the output error and summed up. That is a logical conclusion, but the formula still seems a little confusing, since it clearly says 'weight kh' (between the output layer k and hidden layer h).
Am I understanding everything correctly here? Can anybody confirm this?
What's O(h) of the input layer? My understanding is that each input node has two outputs: one that goes into the first node of the hidden layer and one that goes into the second node of the hidden layer. Which of the two outputs should be plugged into the O(h) * (1 - O(h)) part of the formula?
Back-propagation is just a way of propagating the total loss back into the neural network to know how much of the loss each node is responsible for, and then updating the weights in a way that minimizes the loss, by giving the nodes with higher error rates lower weights and vice versa.
Backpropagation is an algorithm that propagates the errors from the output nodes back to the input nodes. Therefore, it is simply referred to as backward propagation of errors. It is used in many neural-network applications in data mining, such as character recognition and signature verification.
Backpropagation is the process of tuning a neural network's weights to improve prediction accuracy. There are two directions in which information flows in a neural network. Forward propagation, also called inference, is when data goes into the neural network and out pops a prediction.
The tutorial you posted here is actually doing it wrong. I double checked it against Bishop's two standard books and two of my working implementations. I will point out below where exactly.
An important thing to keep in mind is that you are always searching for derivatives of the error function with respect to a unit or weight. The former are the deltas, the latter is what you use to update your weights.
If you want to understand backpropagation, you have to understand the chain rule. It's all about the chain rule here. If you don't know exactly how it works, check it on Wikipedia - it's not that hard. But as soon as you understand the derivations, everything falls into place. Promise! :)
∂E/∂W can be decomposed into ∂E/∂o · ∂o/∂W via the chain rule. ∂o/∂W is easily calculated, since it's just the derivative of the activation/output of a unit with respect to the weights. ∂E/∂o is actually what we call the deltas. (I am assuming that E, o and W are vectors/matrices here.)
We do have them for the output units, since that is where we can calculate the error. (Usually we have an error function whose delta comes down to (t_k - o_k), e.g. the quadratic error function in the case of linear outputs, or cross entropy in the case of logistic outputs.)
The question now is, how do we get the derivatives for the internal units? Well, we know that the output of a unit is the sum of all incoming units weighted by their weights and the application of a transfer function afterwards. So o_k = f(sum(w_kj * o_j, for all j)).
So what we do is derive o_k with respect to o_j: delta_j = ∂E/∂o_j = ∂E/∂o_k · ∂o_k/∂o_j = delta_k · ∂o_k/∂o_j. So given delta_k, we can calculate delta_j!
Let's do this. o_k = f(sum(w_kj * o_j, for all j)) => ∂o_k/∂o_j = f'(sum(w_kj * o_j, for all j)) * w_kj = f'(z_k) * w_kj.
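This derivative can be checked numerically. The sketch below compares f'(z_k) * w_kj against a central finite difference for a sigmoidal f; the weights and activations are arbitrary values chosen purely for illustration.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def dsigmoid(z):
    # derivative of the sigmoid evaluated at pre-activation z
    s = sigmoid(z)
    return s * (1 - s)

w = [0.4, -0.7, 0.2]   # illustrative w_kj for j = 0, 1, 2
o = [0.9, 0.1, 0.5]    # illustrative incoming activations o_j

z_k = sum(wj * oj for wj, oj in zip(w, o))
analytic = dsigmoid(z_k) * w[0]  # f'(z_k) * w_k0, as derived above

def out(o0):
    # o_k as a function of o_0 alone, the other inputs held fixed
    return sigmoid(w[0] * o0 + w[1] * o[1] + w[2] * o[2])

# central finite difference with respect to o_0
eps = 1e-6
numeric = (out(o[0] + eps) - out(o[0] - eps)) / (2 * eps)

print(abs(analytic - numeric))  # should be tiny
```

The two values agree to many decimal places, which is a quick way to convince yourself of any step in a backprop derivation.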
For the case of the sigmoidal transfer function, this becomes z_k(1 - z_k) * w_kj. (Here is the error in the tutorial, the author says o_k(1 - o_k) * w_kj!)
I'm not sure what your question is, but I actually went through that tutorial myself and I can assure you, other than one obvious typo, there is nothing incorrect about it.
I will make the assumption that your question is because you are confused about how the backpropagation hidden delta is derived. If this is indeed your question then please consider
(source: pandamatak.com)
You are probably confused as to how the author derived this equation. It is actually a straightforward application of the multivariate chain rule. Namely (what follows is taken from Wikipedia):
"Suppose that each argument of z = f(u, v) is a two-variable function such that u = h(x, y) and v = g(x, y), and that these functions are all differentiable. Then the chain rule would look like:

∂z/∂x = ∂z/∂u · ∂u/∂x + ∂z/∂v · ∂v/∂x

∂z/∂y = ∂z/∂u · ∂u/∂y + ∂z/∂v · ∂v/∂y"
Now imagine extending the chain rule, by an induction argument, to E(z'_1, z'_2, ..., z'_n), where z'_k is the pre-activation of the kth output-layer unit, and z'_k(w_ji) means that E is a function of the z'_k and each z'_k is itself a function of w_ji (if this doesn't make sense to you at first, think very carefully about how a NN is set up). Applying the chain rule directly, extended to n variables:
∂E(z'_1, z'_2, ..., z'_n)/∂w_ji = Σ_k ∂E/∂z'_k · ∂z'_k/∂w_ji
That is the most important step. The author then applies the chain rule again, this time within the sum, to expand the ∂z'_k/∂w_ji term:
∂z'_k/∂w_ji = ∂z'_k/∂o_j · ∂o_j/∂z_j · ∂z_j/∂w_ji
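The summed chain rule can be sanity-checked numerically on a toy setup: two "output pre-activations" z1(w) and z2(w) that both depend on a single weight w, with E a function of both. The functions and the value of w below are made up purely for illustration.

```python
import math

def z1(w):
    return math.sin(w)

def z2(w):
    return 3.0 * w

def E(w):
    # E depends on w only through z1 and z2, like a loss through the outputs
    return z1(w) ** 2 + z2(w) ** 2

w = 0.7
# chain rule: dE/dw = sum over k of (dE/dz_k) * (dz_k/dw)
analytic = 2 * z1(w) * math.cos(w) + 2 * z2(w) * 3.0

# central finite difference on E directly
eps = 1e-6
numeric = (E(w + eps) - E(w - eps)) / (2 * eps)

print(abs(analytic - numeric))  # should be tiny
```

The agreement between the two values is exactly the statement of the summed chain rule: the contributions through z1 and z2 add up to the total derivative.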
If you have difficulties understanding the chain rule, you may need to take a course on multivariate calculus, or read such a section in a textbook.
Good luck.