By setting the bottom and the top blob to be the same, we can tell Caffe to do "in-place" computation and thus reduce memory consumption.
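For example, an in-place "ReLU" in the net definition looks something like this (a minimal sketch; "conv1" and "relu1" are just placeholder names):

    layer {
      name: "relu1"
      type: "ReLU"
      bottom: "conv1"
      top: "conv1"   # same blob name as bottom => computed in-place
    }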
Currently I know I can safely use "BatchNorm", "Scale" and "ReLU" layers in-place (please let me know if I'm wrong), while for other layers it seems to cause problems (this issue seems to be an example).
When to use in-place layers in Caffe?
How does it work with back-propagation?
As you well noted, in-place layers don't usually work "out of the box".
For some layers it is quite trivial ("ReLU" and other neuron activation layers), since their backward pass can be computed from the output alone.
However, for others it requires special handling in code. For example, the implementation of the "PReLU" layer has a dedicated cache member variable, bottom_memory_, that stores the information needed for backprop. You can see similar code in other layers that specifically test for if (top[0] == bottom[0]) to see whether the layer is being used in an "in-place" fashion.
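To illustrate the pattern, here is a self-contained toy analogue (not Caffe's actual source; the Blob alias and the ToyPReLU class are made up for the example). It mimics a PReLU-like layer that detects the in-place case, caches the original input in Forward, and uses that cache in Backward, mirroring Caffe's if (top[0] == bottom[0]) check:

    #include <iostream>
    #include <vector>

    // Toy stand-in for a Caffe blob: a flat float buffer.
    using Blob = std::vector<float>;

    // A PReLU-like layer (learnable negative slope) that supports in-place use.
    // When top == bottom, Forward caches the input because Backward needs the
    // pre-activation values to compute the slope gradient.
    class ToyPReLU {
     public:
      explicit ToyPReLU(float slope) : slope_(slope) {}

      void Forward(const Blob* bottom, Blob* top) {
        if (top == bottom) {
          bottom_memory_ = *bottom;  // in-place: keep the input before overwriting it
        }
        top->resize(bottom->size());
        for (size_t i = 0; i < bottom->size(); ++i) {
          const float x = (*bottom)[i];  // safe even in-place: read before write
          (*top)[i] = x > 0 ? x : slope_ * x;
        }
      }

      // Writes the gradient w.r.t. the input and returns the slope gradient.
      float Backward(const Blob& top_diff, const Blob* bottom, Blob* bottom_diff) {
        // If we ran in-place, 'bottom' now holds the output, so use the cache.
        const Blob& x = bottom_memory_.empty() ? *bottom : bottom_memory_;
        bottom_diff->resize(x.size());
        float slope_diff = 0.0f;
        for (size_t i = 0; i < x.size(); ++i) {
          (*bottom_diff)[i] = top_diff[i] * (x[i] > 0 ? 1.0f : slope_);
          if (x[i] <= 0) slope_diff += top_diff[i] * x[i];
        }
        return slope_diff;
      }

     private:
      float slope_;
      Blob bottom_memory_;  // cached input for the in-place case
    };

    int main() {
      Blob data = {-1.0f, 2.0f, -3.0f, 4.0f};
      Blob diff = {1.0f, 1.0f, 1.0f, 1.0f};
      Blob grad;
      ToyPReLU layer(0.25f);
      layer.Forward(&data, &data);                           // in-place: top == bottom
      float slope_grad = layer.Backward(diff, &data, &grad);
      std::cout << "slope grad: " << slope_grad << "\n";     // -4 (= -1 + -3)
      for (float g : grad) std::cout << g << " ";            // 0.25 1 0.25 1
      std::cout << "\n";
    }

Without the cached copy, the backward pass would see the already-overwritten activations and compute wrong gradients; that is exactly the kind of bookkeeping an in-place-capable layer has to add.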
Moreover, it makes little sense to have an in-place layer whose input and output have different shapes; thus layers such as "Convolution", "InnerProduct" and "Pool" are not considered candidates for "in-place" computation.