
How does a neural network "remember" what it's learned?

I'm trying to wrap my head around neural networks. From everything I've seen, I understand that they are made up of layers of nodes. These nodes are attached to each other with "weighted" connections, and by passing values into the input layer, the values travel through the nodes and are transformed according to the "weights" of the connections (right?). Eventually they reach the output layer with a value. I understand the process, but I don't see how this leads to the network being trained. Does the network remember a pattern between weighted connections? How does it remember that pattern?

Carrot2472car asked Dec 16 '18


People also ask

Do neural networks have memory?

The main difference between the functioning of artificial neural networks and the biological neural network is memory. While both the human brain and neural networks can read from and write to the memory available to them, the brain can also create and store new memory.

Can a neural network learn itself?

"Yes, neural network computers can learn from experience. Their inherent ability to learn 'on the fly' is one of the primary reasons researchers are excited and optimistic about their future.

How does RNN remember?

An RNN retains information through time, which makes it useful for time-series prediction: it can remember previous inputs while processing the current one. A variant designed to hold on to information over long spans is called Long Short-Term Memory (LSTM). Recurrent neural networks are even used together with convolutional layers to extend the effective pixel neighborhood.
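For intuition, a plain RNN carries information forward through a hidden state that is updated at every time step. The recurrence below is only a generic sketch (the dimensions, random weights, and tanh nonlinearity are illustrative choices, not taken from the snippet):

```python
import numpy as np

rng = np.random.default_rng(2)
W_x = rng.normal(size=(4, 3))  # input -> hidden weights
W_h = rng.normal(size=(4, 4))  # hidden -> hidden weights (the "memory" path)
b = np.zeros(4)

h = np.zeros(4)  # hidden state, carried across time steps
for x_t in rng.normal(size=(5, 3)):  # a toy sequence of 5 input vectors
    # Each step mixes the new input with the previous hidden state,
    # so earlier inputs keep influencing later outputs.
    h = np.tanh(W_x @ x_t + W_h @ h + b)
```

Because h is fed back into itself, the network's output at any step depends on the whole history of inputs, which is the sense in which an RNN "remembers".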

Can a neural network learn anything?

Just like every other supervised machine learning model, neural networks learn relationships between input variables and output variables. In fact, we can even see how they're related to the most iconic model of all: linear regression.


2 Answers

Each weight and bias on each node is like a stored variable. As new data causes the weights and biases to change, these variables change. Eventually training is done and the weights and biases no longer need to change. You can then store the information about all the nodes, weights, biases, and connections however you like. This information is your model. So the "remembering" is just the values of the weights and biases.
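To make that concrete, here is a minimal sketch (the layer size, the use of NumPy, and the file name model.npz are my own illustration, not from the answer): a tiny one-layer network whose entire "memory" is two arrays that can be written to disk and loaded back.

```python
import numpy as np

# A one-layer network: its entire "memory" is these two arrays.
rng = np.random.default_rng(0)
weights = rng.normal(size=(3, 1))  # 3 inputs -> 1 output
bias = np.zeros(1)

def predict(x):
    # Forward pass: weighted sum of inputs plus bias, squashed by a sigmoid.
    return 1 / (1 + np.exp(-(x @ weights + bias)))

print(predict(np.array([1.0, 2.0, 3.0])))

# Saving the model is just saving the weights and biases...
np.savez("model.npz", weights=weights, bias=bias)

# ...and "remembering" is nothing more than loading them back.
saved = np.load("model.npz")
weights, bias = saved["weights"], saved["bias"]
```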

Caleb Macdonald Black answered Nov 15 '22


A neural network remembers what it has learned through its weights and biases. Let's explain it with a binary classification example. During forward propagation, the network computes a probability p, while the actual label is y. The loss is then calculated as -(y·log(p) + (1−y)·log(1−p)). Once the loss is calculated, this information is propagated backwards, and the derivatives of the loss with respect to the weights and biases are computed. The weights and biases are then adjusted according to these derivatives. In one epoch, all the training examples are propagated through the network and the weights and biases are adjusted. The same examples are then propagated forward and backward again, with the weights and biases adjusted at each step. Finally, once the loss has been minimized to a good extent, or a high accuracy has been achieved (taking care not to overfit), we can store the values of the weights and biases, and that is what the neural network has learned.
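As a rough sketch of that loop (the toy data, single-layer model, epoch count, and learning rate are my own assumptions for illustration): forward propagation computes p, the loss -(y·log(p) + (1−y)·log(1−p)) is differentiated, and the resulting gradients nudge the weights and biases each epoch.

```python
import numpy as np

# Toy binary-classification data (illustrative only).
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

w = np.zeros(2)  # weights
b = 0.0          # bias
lr = 0.1         # learning rate

for epoch in range(100):
    # Forward propagation: predicted probability p for each example.
    p = 1 / (1 + np.exp(-(X @ w + b)))
    # Binary cross-entropy loss: -(y*log(p) + (1-y)*log(1-p)), averaged.
    loss = -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))
    # Backward propagation: for a sigmoid output with cross-entropy loss,
    # the gradient with respect to the pre-activation is simply (p - y).
    grad_w = X.T @ (p - y) / len(y)
    grad_b = np.mean(p - y)
    # Adjust weights and bias in the direction that reduces the loss.
    w -= lr * grad_w
    b -= lr * grad_b

# What the network "learned" is just the final values of w and b.
print(w, b, loss)
```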

Kenpachi Zaraki answered Nov 15 '22