Customizing the convolution layer in caffe windows cpp

I have this net 'RGB2GRAY.prototxt':

name: "RGB2GRAY"
layer {
  name: "data"
  type: "Input"
  top: "data"
  input_param { shape: { dim: 1 dim: 3 dim: 512 dim: 512 } }
}

layer {
    name: "conv1"
    bottom: "data"
    top: "conv1"
    type: "Convolution"
    convolution_param {
        num_output: 1
        kernel_size: 1
        pad: 0
        stride: 1
        bias_term: false
        weight_filler {
            type: "constant"
            value: 1
        }
    }
}

I am building my own network that converts RGB to grayscale using the formula

x = 0.299r + 0.587g + 0.114b

So basically, I can do this with a 1x1 convolution whose weights are (0.299, 0.587, 0.114). But I am not sure how to modify the convolution layer: I have set the weights and bias, but the filter values do not change. I tried the approach below, and it fails to update the convolution filter.

shared_ptr<Net<float> > net_;
net_.reset(new Net<float>("path of model file", TEST));

const shared_ptr<Blob<float> >& conv_blob = net_->blob_by_name("conv1");
float* conv_weight = conv_blob->mutable_cpu_data();
conv_weight[0] =  0.299;
conv_weight[1] =  0.587;
conv_weight[2] =  0.114;

net_->Forward();

//for dumping the output
const shared_ptr<Blob<float> >& probs = net_->blob_by_name("conv1");
const float* probs_out = probs->cpu_data();

cv::Mat matout(height, width, CV_32F);

for (size_t i = 0; i < height; i++)
{
    for (size_t j = 0; j < width; j++)
    {
        matout.at<float>(i, j) = probs_out[i * width + j];
    }
}
matout.convertTo(matout, CV_8UC1);
cv::imwrite("gray.bmp", matout);

In Python I found it easy to customize the convolution filter, but I need a solution in C++.

asked Oct 30 '22 by AnkitSahu
1 Answer

Just make a small change in your C++ code:

// access the convolution layer by its name
const shared_ptr<Layer<float> >& conv_layer = net_->layer_by_name("conv1");
// access the layer's blob that stores weights
shared_ptr<Blob<float> >& weight = conv_layer->blobs()[0];
float* conv_weight = weight->mutable_cpu_data();
conv_weight[0] =  0.299;
conv_weight[1] =  0.587;
conv_weight[2] =  0.114;

In fact, "conv1" in your code refers to the convolution layer's output blob, not the blob containing the weights: Net<Dtype>::blob_by_name(const string& blob_name) returns a blob that stores intermediate results passed between layers of the network. To reach the weights, look the layer up with layer_by_name and take the first entry of its blobs() vector, as shown above.

answered Nov 15 '22 by Dale