 

What's the point of the threshold in a perceptron?

I'm having trouble seeing what the threshold actually does in a single-layer perceptron. The data is usually separated no matter what the value of the threshold is. It seems a lower threshold divides the data more equally; is this what it is used for?

asked Jul 02 '11 by Hypercube



3 Answers

Actually, you only set a threshold when you aren't using a bias. Otherwise, the threshold is 0.

Remember that a single neuron divides your input space with a hyperplane.

Now imagine a neuron with 2 inputs X = [x1, x2], 2 weights W = [w1, w2] and a threshold TH. This equation shows how the neuron works:

x1·w1 + x2·w2 = TH

which is equivalent to:

x1·w1 + x2·w2 - 1·TH = 0

That is, this is the equation of the hyperplane that divides your input space.

Notice that this neuron only works if you set the threshold manually. The solution is to turn TH into another weight:

x1·w1 + x2·w2 - 1·w0 = 0

The term 1·w0 is your bias. Now you can still draw a hyperplane in your input space without setting a threshold manually (i.e., the threshold is always 0). And if you do set the threshold to some other value, the weights will simply adapt to satisfy the equation; that is, the weights (including the bias) absorb the threshold's effect.
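Here is a minimal sketch in Python of the two formulations, showing that folding the threshold into a bias weight gives the same decision (the function names and values are my own, illustrative ones, not from any library):

import numpy as np

def fire_with_threshold(x, w, th):
    # Fires when x1·w1 + x2·w2 >= TH
    return int(np.dot(x, w) >= th)

def fire_with_bias(x, w, w0):
    # Same decision with the threshold folded into a bias weight:
    # x1·w1 + x2·w2 - 1·w0 >= 0, i.e., the threshold is now always 0
    return int(np.dot(x, w) - w0 >= 0)

x = np.array([0.5, 1.0])
w = np.array([2.0, -1.0])

# Both formulations describe the same hyperplane, so the outputs agree.
assert fire_with_threshold(x, w, th=0.3) == fire_with_bias(x, w, w0=0.3)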

answered Nov 11 '22 by renatopp


The sum of the products of the weights and the inputs is calculated in each node, and if the value is above some threshold (typically 0), the neuron fires and takes the activated value (typically 1); otherwise it takes the deactivated value (typically -1). Neurons with this kind of activation function are also called artificial neurons or linear threshold units.
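A minimal sketch of such a linear threshold unit in Python (the function name and values are illustrative):

import numpy as np

def linear_threshold_unit(x, w, threshold=0.0):
    s = np.dot(w, x)                   # sum of products of weights and inputs
    return 1 if s > threshold else -1  # activated vs. deactivated value

# 0.5*1.0 + (-0.1)*2.0 = 0.3 > 0, so the neuron fires:
print(linear_threshold_unit(np.array([1.0, 2.0]), np.array([0.5, -0.1])))  # prints 1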

answered Nov 11 '22 by Patrick Desjardins


I think I understand now, with help from Daok. I just wanted to add information for other people to find.

The equation for the separator for a single-layer perceptron is

Σ wj·xj + bias = threshold

This means that if the input is higher than the threshold, or

Σ wj·xj + bias > threshold, it gets classified into one category, and if

Σ wj·xj + bias < threshold, it gets classified into the other.

The bias and the threshold really serve the same purpose: they translate the separating line (see Role of Bias in Neural Networks). Because they sit on opposite sides of the equation, though, they are "negatively proportional".

For example, if the bias was 0 and the threshold 0.5, this would be equivalent to a bias of -0.5 and a threshold of 0.
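A quick numeric check of this equivalence in Python (the weights and helper below are arbitrary, illustrative choices):

import numpy as np

w = np.array([1.0, -2.0])  # arbitrary weights

def classify(x, bias, threshold):
    return np.dot(w, x) + bias > threshold

# bias=0, threshold=0.5 classifies every point exactly like bias=-0.5, threshold=0
rng = np.random.default_rng(0)
for _ in range(1000):
    x = rng.uniform(-1.0, 1.0, size=2)
    assert classify(x, bias=0.0, threshold=0.5) == classify(x, bias=-0.5, threshold=0.0)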

answered Nov 11 '22 by Hypercube