I am currently trying to understand what weight sharing actually is in a convolutional neural network.
From what I have read, CNNs were first introduced as a way to reduce the number of connections required between input and output, given that the input has 3 dimensions.
Following that logic, it made sense to me that a convolution reduces one of these dimensions and then connects the convolved image to an output neuron.
Are the weights connecting the convolved image to the output the weights that are not shared? And if not, which weights are not shared?
Or, if they are, does backpropagation then view each shared weight as a single parameter and update it as one unit?
An illustration of what I think weight sharing is:

A CNN has multiple layers. Weight sharing means that the filters in a particular layer are reused across the whole input: the weights are the numbers within each filter, so essentially we are trying to learn a filter. Each filter acts on a small section of the image (its receptive field), and as the filter slides across the image its weights do not change. The idea is that if an edge is worth detecting in one part of an image, it is worth detecting in every other part too.
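Here is a minimal sketch, assuming PyTorch, that makes both points concrete: the layer sizes (a 3-channel 32x32 input, 16 filters of size 3x3) are illustrative, not anything from the question. It contrasts the parameter count of a shared-weight conv layer with an unshared fully connected layer of the same output size, and shows that backprop indeed treats each shared weight as one parameter, summing the gradient contributions from every spatial position into a single kernel tensor.

```python
import torch
import torch.nn as nn

x = torch.randn(1, 3, 32, 32)  # one 32x32 RGB image (illustrative size)

# Convolutional layer: 16 filters of size 3x3 over 3 input channels,
# slid over the whole image. The same 448 parameters are reused at
# every spatial position -- that is the weight sharing.
conv = nn.Conv2d(in_channels=3, out_channels=16, kernel_size=3, padding=1)
print(sum(p.numel() for p in conv.parameters()))  # 448 = 16*(3*3*3) + 16 biases

# A fully connected layer producing an output of the same size has a
# separate weight for every input-output pair: no sharing.
fc = nn.Linear(3 * 32 * 32, 16 * 32 * 32)
print(sum(p.numel() for p in fc.parameters()))  # 50,348,032

# Backprop views each shared weight as one parameter: the gradients
# from all positions the filter visited are summed into one tensor.
loss = conv(x).sum()
loss.backward()
print(conv.weight.grad.shape)  # torch.Size([16, 3, 3, 3]) -- one gradient per shared weight
```

So to the last part of the question: yes, during backpropagation a shared weight is updated as one unit, with its gradient accumulated over every location where the filter was applied.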