Caffe has a layer type "Python".
For instance, this layer type can be used as a loss layer.
On other occasions it is used as an input layer.
What is this layer type?
How can this layer be used?
A layer is a data-processing module that takes as input one or more tensors and that outputs one or more tensors. Some layers are stateless, but more frequently layers have a state: the layer's weights, one or several tensors learned with stochastic gradient descent, which together contain the network's knowledge.
Prune's and Bharat's answers give the overall purpose of a "Python" layer: a general-purpose layer that is implemented in Python rather than C++.
I intend this answer to serve as a tutorial for using a "Python" layer.
"Python"
layer"Python"
layer?Please see the excellent answers of Prune and Bharat.
In order to use a "Python" layer you need to compile caffe with the flag
WITH_PYTHON_LAYER := 1
set in 'Makefile.config'.
"Python"
layer?A "Python"
layer should be implemented as a python class derived from caffe.Layer
base class. This class must have the following four methods:
import caffe

class my_py_layer(caffe.Layer):
    def setup(self, bottom, top):
        pass

    def reshape(self, bottom, top):
        pass

    def forward(self, bottom, top):
        pass

    def backward(self, top, propagate_down, bottom):
        pass
What are these methods?
def setup(self, bottom, top): This method is called once when caffe builds the net. It should check that the number of inputs (len(bottom)) and the number of outputs (len(top)) are as expected.
You should also allocate the internal parameters of the net here (i.e., self.add_blobs()); see this thread for more information.
This method has access to self.param_str - a string passed from the prototxt to the layer. See this thread for more information.
def reshape(self, bottom, top): This method is called whenever caffe reshapes the net. It should allocate the outputs (each of the top blobs). The outputs' shapes are usually related to the bottoms' shapes.
def forward(self, bottom, top): Implements the forward pass from bottom to top.
def backward(self, top, propagate_down, bottom): This method implements backpropagation: it propagates the gradients from top to bottom. propagate_down is a Boolean vector of len(bottom) indicating to which of the bottoms the gradient should be propagated.
You can find some more information about the bottom and top inputs in this post. A minimal example tying all four methods together is sketched right below.
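For concreteness, here is a minimal sketch of a complete "Python" layer; the class name ScaleByTwoLayer and its behavior (multiplying its input by a constant) are illustrative choices of mine, not taken from any caffe example:

import caffe

class ScaleByTwoLayer(caffe.Layer):
    """A toy layer that outputs its single input multiplied by 2."""

    def setup(self, bottom, top):
        # called once when the net is built: validate inputs/outputs
        if len(bottom) != 1:
            raise Exception('expecting a single input')
        if len(top) != 1:
            raise Exception('expecting a single output')

    def reshape(self, bottom, top):
        # the output has the same shape as the input
        top[0].reshape(*bottom[0].data.shape)

    def forward(self, bottom, top):
        top[0].data[...] = 2.0 * bottom[0].data

    def backward(self, top, propagate_down, bottom):
        # chain rule: d(2x)/dx = 2
        if propagate_down[0]:
            bottom[0].diff[...] = 2.0 * top[0].diff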
Examples
You can see some examples of simplified python layers here, here and here.
An example of a "moving average" output layer can be found here.
Trainable parameters

A "Python" layer can have trainable parameters (like "Conv", "InnerProduct", etc.).
You can find more information on adding trainable parameters in this thread and this one. There is also a very simplified example in caffe git; a sketch of that pattern follows below.
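As a hedged sketch of that pattern (modeled on the simplified parameter-layer example in caffe's python test suite, and assuming pycaffe exposes self.blobs.add_blob as it does there), a layer with one trainable scalar that scales its input could look like this:

import caffe

class SimpleParamLayer(caffe.Layer):
    """A toy layer with one trainable scalar that scales its input."""

    def setup(self, bottom, top):
        # register a single parameter blob with one element and initialize it
        self.blobs.add_blob(1)
        self.blobs[0].data[0] = 1.0

    def reshape(self, bottom, top):
        top[0].reshape(*bottom[0].data.shape)

    def forward(self, bottom, top):
        top[0].data[...] = self.blobs[0].data[0] * bottom[0].data

    def backward(self, top, propagate_down, bottom):
        # gradient w.r.t. the parameter: sum over all elements
        self.blobs[0].diff[0] = (top[0].diff * bottom[0].data).sum()
        if propagate_down[0]:
            bottom[0].diff[...] = self.blobs[0].data[0] * top[0].diff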
"Python"
layer in a prototxt?See Bharat's answer for details.
You need to add the following to your prototxt:
layer {
  name: 'rpn-data'
  type: 'Python'
  bottom: 'rpn_cls_score'
  bottom: 'gt_boxes'
  bottom: 'im_info'
  bottom: 'data'
  top: 'rpn_labels'
  top: 'rpn_bbox_targets'
  top: 'rpn_bbox_inside_weights'
  top: 'rpn_bbox_outside_weights'
  python_param {
    module: 'rpn.anchor_target_layer'  # python module name where your implementation is
    layer: 'AnchorTargetLayer'  # the name of the class implementation
    param_str: "'feat_stride': 16"  # optional parameters passed to the layer
  }
}
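Note that param_str reaches the layer as a plain string; parsing it is up to you. One possible approach (a sketch of mine, not necessarily the method used by the rpn code above) is to wrap the string in braces and parse it as a Python dict literal:

import ast

def parse_param_str(param_str):
    # "'feat_stride': 16" -> {'feat_stride': 16}
    return ast.literal_eval('{%s}' % param_str)

# inside setup(self, bottom, top) you could then write:
#     params = parse_param_str(self.param_str)
#     self.feat_stride = params['feat_stride']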
"Python"
layer using pythonic NetSpec
interface?It's very simple:
import caffe
from caffe import layers as L

ns = caffe.NetSpec()
# define layers here...
ns.rpn_labels, ns.rpn_bbox_targets, \
    ns.rpn_bbox_inside_weights, ns.rpn_bbox_outside_weights = \
    L.Python(ns.rpn_cls_score, ns.gt_boxes, ns.im_info, ns.data,
             name='rpn-data',
             ntop=4,  # tell caffe to expect four output blobs
             python_param={'module': 'rpn.anchor_target_layer',
                           'layer': 'AnchorTargetLayer',
                           'param_str': "'feat_stride': 16"})
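To get an actual prototxt out of the NetSpec, serialize it with to_proto(); a minimal usage sketch (the file name is arbitrary):

# write the net definition to a prototxt file caffe can load
with open('my_net.prototxt', 'w') as f:
    f.write(str(ns.to_proto()))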
"Python"
layer?Invoking python code from caffe is nothing you need to worry about. Caffe uses boost API to call python code from compiled c++.
What do you do need to do?
Make sure the python module implementing your layer is in $PYTHONPATH, so that when caffe imports it, it can be found.
For instance, if your module my_python_layer.py is in /path/to/my_python_layer.py, then
PYTHONPATH=/path/to:$PYTHONPATH $CAFFE_ROOT/build/tools/caffe train -solver my_solver.prototxt
should work just fine.
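If you construct the net from Python (via pycaffe) rather than from the command line, you can achieve the same effect by extending sys.path before loading the net; a small sketch using the same hypothetical paths and a hypothetical prototxt file name:

import sys
sys.path.insert(0, '/path/to')  # directory containing my_python_layer.py

import caffe
# the "Python" layers referenced by the prototxt can now be imported
net = caffe.Net('my_net.prototxt', caffe.TEST)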
You should always test your layer before putting it to use.
Testing the forward function is entirely up to you, as each layer has a different functionality.
Testing the backward method is easy: since backward only implements the gradient of forward, it can be tested numerically and automatically!
Check out the test_gradient_for_python_layer testing utility:
import numpy as np
from test_gradient_for_python_layer import test_gradient_for_python_layer

# set the inputs
input_names_and_values = [('in_cont', np.random.randn(3, 4)),
                          ('in_binary', np.random.binomial(1, 0.4, (3, 1)))]
output_names = ['out1', 'out2']
py_module = 'folder.my_layer_module_name'
py_layer = 'my_layer_class_name'
param_str = 'some params'
propagate_down = [True, False]

# call the test
test_gradient_for_python_layer(input_names_and_values, output_names,
                               py_module, py_layer, param_str,
                               propagate_down)

# you are done!
It is worth noting that python code runs on the CPU only. Thus, if you place a Python layer in the middle of your net and run on a GPU, you will see a significant degradation in performance. This happens because caffe needs to copy blobs from the GPU to the CPU before calling the python layer, and then copy them back to the GPU to proceed with the forward/backward pass.
This degradation is far less significant if the python layer is either an input layer or the topmost loss layer.
Update: On Sep 19th, 2017 PR #5904 was merged into master. This PR exposes GPU pointers of blobs via the python interface.
You may access blob._gpu_data_ptr and blob._gpu_diff_ptr directly from python at your own risk.