
How does a back-propagation training algorithm work?

I've been trying to learn how back-propagation works with neural networks, but yet to find a good explanation from a less technical aspect.

How does back-propagation work? How does it learn from a training dataset provided? I will have to code this, but until then I need to gain a stronger understanding of it.

asked Jan 26 '12 by unleashed


People also ask

How does back propagation algorithm work?

The backpropagation algorithm works by computing the gradient of the loss function with respect to each weight by the chain rule, computing the gradient one layer at a time and iterating backward from the last layer to avoid redundant calculations of intermediate terms in the chain rule; this is an example of dynamic programming.

What is back propagation training algorithm?

Backpropagation, or backward propagation of errors, is an algorithm that is designed to test for errors working back from output nodes to input nodes. It is an important mathematical tool for improving the accuracy of predictions in data mining and machine learning.

How is the training algorithm performed in back propagation neural networks?

The algorithm trains a neural network effectively by means of the chain rule. In simple terms, after each forward pass through the network, backpropagation performs a backward pass while adjusting the model's parameters (weights and biases).

What are the steps in back propagation algorithm?

Backpropagation algorithm: Step 1: Inputs X arrive through the preconnected path. Step 2: The input is modeled using real weights W; the weights are usually chosen randomly. Step 3: Calculate the output of each neuron from the input layer, through the hidden layer, to the output layer (a rough sketch of these steps follows below).
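
As an illustration of those three steps, here is a minimal sketch of the forward pass for a tiny fully connected network with one hidden layer; the sigmoid activation, layer sizes, and variable names are assumptions made for the example, not something fixed by the algorithm:

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    rng = np.random.default_rng(0)

    # Step 1: an input vector X arrives.
    X = np.array([0.5, -1.2, 3.0])

    # Step 2: the weights are chosen randomly, one matrix per layer.
    W_hidden = rng.normal(size=(4, 3))   # 4 hidden neurons, 3 inputs
    W_output = rng.normal(size=(2, 4))   # 2 output neurons, 4 hidden neurons

    # Step 3: compute each neuron's output, layer by layer:
    # input layer -> hidden layer -> output layer.
    hidden = sigmoid(W_hidden @ X)
    output = sigmoid(W_output @ hidden)
    print(output)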


2 Answers

Back-propagation works with logic very similar to that of feed-forward. The difference is the direction of data flow. In the feed-forward step, you have the inputs and the output computed from them: you propagate the values forward, activating the neurons ahead.

In the back-propagation step, you cannot know the error at every neuron, only at the ones in the output layer. Calculating the errors of the output nodes is straightforward: take the difference between each neuron's output and the expected output for that instance in the training set. The neurons in the hidden layers must derive their errors from this, so you have to pass the error values back to them. From these values, the hidden neurons can update their weights and other parameters, using the weighted sum of the errors from the layer ahead.
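
A minimal sketch of that idea for a small one-hidden-layer network; the sigmoid activation, squared-error loss, and learning rate are my assumptions for the example, not something specified in the answer:

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # Forward pass for one training instance (shapes are illustrative).
    rng = np.random.default_rng(1)
    X = np.array([0.5, -1.2, 3.0])
    target = np.array([0.0, 1.0])
    W_hidden = rng.normal(size=(4, 3))
    W_output = rng.normal(size=(2, 4))
    hidden = sigmoid(W_hidden @ X)
    output = sigmoid(W_output @ hidden)

    # Output layer: the error is just (output - target), scaled by the
    # derivative of the sigmoid activation.
    delta_output = (output - target) * output * (1.0 - output)

    # Hidden layer: it cannot see its own error directly, so it takes the
    # weighted sum of the errors from the layer ahead.
    delta_hidden = (W_output.T @ delta_output) * hidden * (1.0 - hidden)

    # Each layer updates its weights from its own delta.
    learning_rate = 0.1
    W_output -= learning_rate * np.outer(delta_output, hidden)
    W_hidden -= learning_rate * np.outer(delta_hidden, X)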

A step-by-step demo of feed-forward and back-propagation steps can be found here.


Edit

If you're new to neural networks, you can begin by learning about the perceptron, then advance to NNs, which are in fact multilayer perceptrons.

answered Oct 07 '22 by Sufian Latif


High-level description of the backpropagation algorithm

Backpropagation is trying to do a gradient descent on the error surface of the neural network, adjusting the weights with dynamic programming techniques to keep the computations tractable.

I will try to explain, in high-level terms, all of the concepts just mentioned.

Error surface

If you have a neural network with, say, N neurons in the output layer, that means your output is really an N-dimensional vector, and that vector lives in an N-dimensional space (or on an N-dimensional surface). So does the "correct" output that you're training against. So does the difference between your "correct" answer and the actual output.

That difference, with suitable conditioning (especially some consideration of absolute values), is the error vector, living on the error surface.
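
As a tiny concrete example, this is what the error vector and a scalar error for one training instance might look like; the specific numbers and the squared-error form are illustrative assumptions:

    import numpy as np

    output = np.array([0.8, 0.2, 0.6])   # what the network actually produced
    target = np.array([1.0, 0.0, 0.5])   # the "correct" answer for this instance

    error_vector = output - target              # lives in the same N-dimensional space
    squared_error = np.sum(error_vector ** 2)   # one point on the error surface
    print(error_vector, squared_error)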

Gradient descent

With that concept, you can think of training the neural network as the process of adjusting the weights of your neurons so that the error function is small, ideally zero. Conceptually, you do this with calculus. If you only had one output and one weight, this would be simple -- take a few derivatives, which would tell you which "direction" to move, and make an adjustment in that direction.

But you don't have one neuron, you have N of them, and substantially more input weights.

The principle is the same, except instead of using calculus on lines looking for slopes that you can picture in your head, the equations become vector algebra expressions that you can't easily picture. The term gradient is the multi-dimensional analogue to slope on a line, and descent means you want to move down that error surface until the errors are small.
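
A toy sketch of gradient descent with a single weight, where the gradient really is just a slope; the quadratic error function here is invented purely for illustration:

    # Gradient descent on a one-weight toy problem: error(w) = (w - 3) ** 2.
    # The derivative tells us which "direction" to move; we step downhill.
    w = 0.0
    learning_rate = 0.1
    for _ in range(100):
        gradient = 2.0 * (w - 3.0)   # d/dw of (w - 3) ** 2
        w -= learning_rate * gradient
    print(w)   # approaches 3.0, where the error is zero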

Dynamic programming

There's another problem, though -- if you have more than one layer, you can't easily see how a change in the weights of some non-output layer affects the actual output.

Dynamic programming is a bookkeeping method to help track what's going on. At the very highest level, if you naively try to do all this vector calculus, you end up calculating some derivatives over and over again. The modern backpropagation algorithm avoids some of that, and it so happens that you update the output layer first, then the second-to-last layer, and so on. Updates propagate backwards from the output, hence the name.
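
A sketch of that bookkeeping for a generic fully connected network, assuming sigmoid activations and a squared-error loss (both are my assumptions): each layer's delta is computed once from the layer ahead of it and then reused, rather than re-deriving the whole chain rule separately for every weight.

    import numpy as np

    def backward_pass(weights, activations, target, learning_rate=0.1):
        """weights[i] maps activations[i] to activations[i + 1];
        activations[-1] is the network's output (sigmoid units assumed)."""
        # The output layer's delta comes directly from the observed error.
        delta = (activations[-1] - target) * activations[-1] * (1.0 - activations[-1])
        # Walk backwards through the layers. Each earlier layer reuses the delta
        # already computed for the layer ahead of it -- the dynamic-programming part.
        for i in reversed(range(len(weights))):
            gradient = np.outer(delta, activations[i])
            if i > 0:
                delta = (weights[i].T @ delta) * activations[i] * (1.0 - activations[i])
            weights[i] -= learning_rate * gradient

Here `weights` and `activations` would come from a forward pass like the one sketched earlier in this page.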

So, if you're lucky enough to have been exposed to gradient descent or vector calculus before, then hopefully that clicked.

The full derivation of backpropagation can be condensed into about a page of tight symbolic math, but it's hard to get the sense of the algorithm without a high-level description. (It's downright intimidating, in my opinion.) If you haven't got a good handle on vector calculus, then, sorry, the above probably wasn't helpful. But to get backpropagation to actually work, it's not necessary to understand the full derivation.


I found the following paper (by Rojas) very helpful when I was trying to understand this material, even if it's a big PDF of one chapter of his book.

http://page.mi.fu-berlin.de/rojas/neural/chapter/K7.pdf

answered Oct 07 '22 by Novak