How to reshape a blob of shape N x C x H x W to N x 1 x (C*H) x W in Caffe?
I want to make a convolution layer whose weights are identical across channels. One way I came up with is to reshape the bottom blob from N x C x H x W to N x 1 x (C*H) x W and place a convolution layer on top of it. But I just don't know how to reshape a blob. Please help me out, thank you.
As pointed out by whjxnyzh, you can use a "Reshape" layer. Caffe is quite flexible in the way it allows you to define the output shape. See the declaration of reshape_param in caffe.proto:
// Specify the output dimensions. If some of the dimensions are set to 0,
// the corresponding dimension from the bottom layer is used (unchanged).
// Exactly one dimension may be set to -1, in which case its value is
// inferred from the count of the bottom blob and the remaining dimensions.
In your case I guess you'll have a layer like this:
layer {
  name: "my_reshape"
  type: "Reshape"
  bottom: "in"
  top: "reshaped_in"
  reshape_param { shape: { dim: 0 dim: 1 dim: -1 dim: 0 } }
}
See also caffe.help.
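For concreteness, here is a rough sketch (not from the original answer) of how the reshaped blob could feed the convolution the question asks about and then be folded back to N x C x H x W. The layer names, C = 3, num_output: 1, and pad: 1 (to keep the spatial size) are my assumptions:

# Sketch only: assumes the bottom blob "in" has C = 3 channels; names are illustrative.
layer {
  name: "my_reshape"
  type: "Reshape"
  bottom: "in"
  top: "reshaped_in"    # N x 1 x (C*H) x W
  reshape_param { shape: { dim: 0 dim: 1 dim: -1 dim: 0 } }
}
layer {
  name: "conv_on_reshaped"
  type: "Convolution"
  bottom: "reshaped_in"
  top: "conv_out"       # N x 1 x (C*H) x W, since pad keeps the spatial size
  convolution_param {
    num_output: 1       # a single filter, so the result can be folded back to N x C x H x W
    kernel_size: 3
    pad: 1
    stride: 1
  }
}
layer {
  name: "my_reshape_back"
  type: "Reshape"
  bottom: "conv_out"
  top: "restored"       # N x C x H x W again
  reshape_param { shape: { dim: 0 dim: 3 dim: -1 dim: 0 } }  # dim: 3 hard-codes C = 3
}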
Caffe now has a ReshapeLayer for you: http://caffe.berkeleyvision.org/doxygen/classcaffe_1_1ReshapeLayer.html
If I understand your final objective correctly, Caffe's convolution layer can already perform convolution on multiple inputs with a common/shared set of filters, like this:
layer {
  name: "conv"
  type: "Convolution"
  bottom: "in1"
  bottom: "in2"
  bottom: "in3"
  top: "out1"
  top: "out2"
  top: "out3"
  convolution_param {
    num_output: 10  # the same 10 filters for all 3 inputs
    kernel_size: 3
  }
}
This assumes you have already split the streams (a Slice layer can do that); afterwards you can merge them, if desired, with a Concat or Eltwise layer, as sketched below. This avoids the need to reshape the blob, convolve it, and reshape it back, which might introduce cross-channel interference near the margins.
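A minimal sketch of that split / shared-convolution / merge arrangement; the blob names ("data", "in1"..."in3", "out_all") and the slice points are my assumptions, here slicing a 3-channel input into its individual channels:

# Sketch only: slices a 3-channel blob into single-channel streams,
# convolves all of them with the same 10 filters, then concatenates the results.
layer {
  name: "split_channels"
  type: "Slice"
  bottom: "data"          # N x 3 x H x W
  top: "in1"
  top: "in2"
  top: "in3"
  slice_param { axis: 1 slice_point: 1 slice_point: 2 }
}
layer {
  name: "conv"
  type: "Convolution"
  bottom: "in1"
  bottom: "in2"
  bottom: "in3"
  top: "out1"
  top: "out2"
  top: "out3"
  convolution_param {
    num_output: 10  # the same 10 filters applied to every input stream
    kernel_size: 3
  }
}
layer {
  name: "merge"
  type: "Concat"
  bottom: "out1"
  bottom: "out2"
  bottom: "out3"
  top: "out_all"          # N x 30 x H' x W'
  concat_param { axis: 1 }
}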