I'm currently studying neural network theory, and everywhere I read that a network consists of the following layers: an input layer, one or more hidden layers, and an output layer.
Some graphical depictions show the input layer as real nodes in the net, while others show it as just a vector of values [x1, x2, ..., xn].
What is the correct structure?
Is the "input layer" a real layer of neurons, or is it only abstractly called a layer while really being just the input vector?
Here are two contradictory and confusing pictures I found on the web:
Here it looks like the input layer consists of neurons:
Here it looks like the input layer is just an input vector:
> Is the "input layer" a real layer of neurons, or is it only abstractly called a layer while really being just the input vector?
Yes, it's both, depending on the level of abstraction. On paper the network has input neurons. At the implementation level you have to organize this data (usually in arrays/vectors), which is why one speaks of an input vector:
An input vector holds the input neuron values (representing the input layer).
If you're familiar with the basics of graph theory or image processing, it's the same principle: for example, you can call an image a matrix (technical view) or a field of pixels (more abstract view).