
Accuracy issue in caffe

I have a network with 4 Boolean outputs. It is not a classification problem, and each output is meaningful. I expect to get a zero or one for each of them. Right now I am using the Euclidean loss function.

There are 1000000 samples. In the input file, each sample has 144 features, so the input is of size 1000000*144. I use a batch size of 50, because otherwise the processing time is too long. The output file is of size 1000000*4, i.e. there are four outputs per input.

When I use the accuracy layer, it complains about the dimension of the output: it expects a single Boolean output, not four. I think this is because it treats the problem as a classification problem. I have two questions. First, considering the accuracy layer's error, is the Euclidean loss function suitable for this task, and how can I get the accuracy of my network? Second, I want to obtain the exact predicted value of each of the four variables for every test record; right now I only get the loss value for each batch. Please guide me on these issues.

Thanks, Afshin

The train network is:

state {
  phase: TRAIN
}
layer {
  name: "abbas"
  type: "HDF5Data"
  top: "data"
  top: "label"
  hdf5_data_param {
    source: "/home/afo214/Research/hdf5/simulation/Train-1000-11-1/Train-Sc-B-1000-11-1.txt"
    batch_size: 50
  }
}

layer {
  name: "ip1"
  type: "InnerProduct"
  bottom: "data"
  top: "ip1"
  inner_product_param {
    num_output: 350
    weight_filler {
      type: "xavier"
    }
  }
}

layer {
  name: "sig1"
  bottom: "ip1"
  top: "sig1"
  type: "Sigmoid"
}

layer {
  name: "ip2"
  type: "InnerProduct"
  bottom: "sig1"
  top: "ip2"
  inner_product_param {
    num_output: 150
    weight_filler {
      type: "xavier"
    }
  }
}

The test network is:

state {
  phase: TEST
}
layer {
  name: "abbas"
  type: "HDF5Data"
  top: "data"
  top: "label"
  hdf5_data_param {
    source: "/home/afo214/Research/hdf5/simulation/Train-1000-11-1/Train-Sc-B-1000-11-1.txt"
    batch_size: 50
  }
}

layer {
  name: "ip1"
  type: "InnerProduct"
  bottom: "data"
  top: "ip1"
  inner_product_param {
    num_output: 350
    weight_filler {
      type: "xavier"
    }
  }
}

layer {
  name: "sig1"
  bottom: "ip1"
  top: "sig1"
  type: "Sigmoid"
}

layer {
  name: "ip2"
  type: "InnerProduct"
  bottom: "sig1"
  top: "ip2"
  inner_product_param {
    num_output: 150
    weight_filler {
      type: "xavier"
    }
  }
}

layer {
  name: "sig2"
  bottom: "ip2"
  top: "sig2"
  type: "Sigmoid"
}

layer {
  name: "ip4"
  type: "InnerProduct"
  bottom: "sig2"
  top: "ip4"
  inner_product_param {
    num_output: 4
    weight_filler {
      type: "xavier"
    }
  }
}

layer {
  name: "accuracy"
  type: "Accuracy"
  bottom: "ip4"
  bottom: "label"
  top: "accuracy"
}

layer {
  name: "loss"
  type: "EuclideanLoss"
  bottom: "ip4"
  bottom: "label"
  top: "loss"
}

And I get this error:

accuracy_layer.cpp:34] Check failed: outer_num_ * inner_num_ == bottom[1]->count() (50 vs. 200) Number of labels must match number of predictions; e.g., if label axis == 1 and prediction shape is (N, C, H, W), label count (number of labels) must be N*H*W, with integer values in {0, 1, ..., C-1}.

Without the accuracy layer, caffe gives me the loss value.

Afshin Oroojlooy asked Dec 11 '15
1 Answer

Should "EuclideanLoss" be used for predicting binary outputs?

If you are trying to predict discrete binary labels then "EuclideanLoss" is not a very good choice. This loss is better suited for regression tasks where you wish to predict continuous values (e.g., estimating the coordinates of bounding boxes, etc.).
For predicting discrete labels, "SoftmaxWithLoss" or "InfogainLoss" are better suited. Usually, "SoftmaxWithLoss" is used.
For predicting binary outputs you may also consider "SigmoidCrossEntropyLoss".
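For completeness, here is a minimal sketch of that last option, reusing the "ip4" and "label" blob names from your net. "SigmoidCrossEntropyLoss" takes the raw (pre-sigmoid) scores and the binary 4-vector label directly, so no label slicing is needed for the loss (note, though, that the "Accuracy" layer still cannot consume this 4-wide output as-is):

layer {
  name: "loss"
  type: "SigmoidCrossEntropyLoss"
  bottom: "ip4"   # raw (pre-sigmoid) scores, shape 50-by-4
  bottom: "label" # binary 4-vector ground truth
  top: "loss"
}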

Why is there an error in the "Accuracy" layer?

In caffe, the "Accuracy" layer expects two inputs ("bottom"s): a vector of predictions and the ground-truth discrete label. This is exactly why you see the error: your label blob carries four values per sample (50*4 = 200 for a batch of 50), while "Accuracy" expects a single integer label per sample (50). So, for each binary output you need to provide a vector of length 2 with the predicted probabilities of 0 and 1, together with a single binary label:

layer {
  name: "acc01"
  type: "Accuracy"
  bottom: "predict01"
  bottom: "label01"
  top: "acc01"
}

In this example you measure the accuracy of a single binary output. The input "predict01" holds a two-vector for each example in the batch (for batch_size: 50 the shape of this blob should be 50-by-2).

What can you do?

You are trying to predict 4 different outputs in a single net; therefore, you need 4 different loss and accuracy layers.
First, you need to split ("Slice") the ground-truth labels into 4 scalars (instead of a single binary 4-vector):

layer {
  name: "label_split"
  bottom: "label" # name of input 4-vector
  top: "label01"
  top: "label02"
  top: "label03"
  top: "label04"
  type: "Slice"
  slice_param {
    axis: 1
    slice_point: 1
    slice_point: 2
    slice_point: 3
  }
}

Each slice_point cuts the N-by-4 label blob along axis: 1, leaving four N-by-1 blobs. Now you need a prediction, a loss, and an accuracy layer for each of the binary labels:

layer {
  name: "predict01"
  type: "InnerProduct"
  bottom: "sig2"
  top: "predict01"
  inner_product_param {
    num_output: 2 # because you need to predict 2 probabilities: one for False, one for True
    ...
  }
}
layer {
  name: "loss01"
  type: "SoftmaxWithLoss"
  bottom: "predict01"
  bottom: "label01"
  top: "loss01"
}
layer {
  name: "acc01"
  type: "Accuracy"
  bottom: "predict01"
  bottom: "label01"
  top: "acc01"
}

Now replicate these three layers for each of the four binary labels you wish to predict.
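Regarding your second question (getting the exact predicted value for every test record): you can read any blob of the net from Python via pycaffe after a forward pass. A minimal sketch, assuming hypothetical file names deploy.prototxt and snapshot.caffemodel and the "predict01" blob defined above:

import caffe

# Hypothetical file names; substitute your own deploy prototxt and weight snapshot.
net = caffe.Net('deploy.prototxt', 'snapshot.caffemodel', caffe.TEST)
net.forward()  # run one batch through the net
# net.blobs exposes every intermediate blob by name; .data is a numpy array.
scores01 = net.blobs['predict01'].data  # shape (50, 2) for batch_size: 50
pred01 = scores01.argmax(axis=1)        # hard 0/1 decision per sample

The same pattern applies to "predict02" through "predict04".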

Shai answered Nov 19 '22