
How to get the output from YOLO model using tensorflow with C++ correctly?

I'm trying to write an inference program for a YOLO model in C++. I've searched for some info about darknet, but it needs a .cfg file to import the model structure (which is a bit too complicated for me...), so I want to write the program with TensorFlow instead.

(My model weights were converted from .hdf5 (used in Python) to .pb (used in C++).)

I've found some examples written in Python, and it seems like they do some extra work before the actual inference call... Source

import tensorflow as tf
from keras import backend as K

def yolo_eval(yolo_outputs,
              anchors,
              num_classes,
              image_shape,
              max_boxes=50,
              score_threshold=.6,
              iou_threshold=.5):
    """Evaluate YOLO model on given input and return filtered boxes."""
    num_layers = len(yolo_outputs)
    anchor_mask = [[6,7,8], [3,4,5], [0,1,2]] if num_layers==3 else [[3,4,5], [1,2,3]] # default setting
    input_shape = K.shape(yolo_outputs[0])[1:3] * 32
    boxes = []
    box_scores = []
    for l in range(num_layers):
        _boxes, _box_scores = yolo_boxes_and_scores(yolo_outputs[l],
            anchors[anchor_mask[l]], num_classes, input_shape, image_shape)
        boxes.append(_boxes)
        box_scores.append(_box_scores)
    boxes = K.concatenate(boxes, axis=0)
    box_scores = K.concatenate(box_scores, axis=0)

    mask = box_scores >= score_threshold
    max_boxes_tensor = K.constant(max_boxes, dtype='int32')
    boxes_ = []
    scores_ = []
    classes_ = []
    for c in range(num_classes):
        # TODO: use keras backend instead of tf.
        class_boxes = tf.boolean_mask(boxes, mask[:, c])
        class_box_scores = tf.boolean_mask(box_scores[:, c], mask[:, c])
        nms_index = tf.image.non_max_suppression(
            class_boxes, class_box_scores, max_boxes_tensor, iou_threshold=iou_threshold)
        class_boxes = K.gather(class_boxes, nms_index)
        class_box_scores = K.gather(class_box_scores, nms_index)
        classes = K.ones_like(class_box_scores, 'int32') * c
        boxes_.append(class_boxes)
        scores_.append(class_box_scores)
        classes_.append(classes)
    boxes_ = K.concatenate(boxes_, axis=0)
    scores_ = K.concatenate(scores_, axis=0)
    classes_ = K.concatenate(classes_, axis=0)

    return boxes_, scores_, classes_

I've printed out the return values and they look like this:

boxes-> Tensor("concat_11:0", shape=(?, 4), dtype=float32)

scores-> Tensor("concat_12:0", shape=(?,), dtype=float32)

classes-> Tensor("concat_13:0", shape=(?,), dtype=int32)

The original output of my YOLO model (.hdf5) is (I got this by printing out model.output):

tf.Tensor 'conv2d_59_1/BiasAdd:0' shape=(?, ?, ?, 21) dtype=float32

tf.Tensor 'conv2d_67_1/BiasAdd:0' shape=(?, ?, ?, 21) dtype=float32

tf.Tensor 'conv2d_75_1/BiasAdd:0' shape=(?, ?, ?, 21) dtype=float32

And the inference part of the Python code is:

out_boxes, out_scores, out_classes = sess.run(
                                    [boxes, scores, classes],
                                    feed_dict={
                                        yolo_model.input: image_data,
                                        input_image_shape: [image.size[1], image.size[0]],
                                        K.learning_phase(): 0
                                    })

Compared to the Python version of the inference code, the C++ part is... (Reference)

int main()
{
    string image = "test.jpg";
    string graph = "yolo_weight.pb";
    string labels = "coco.names";
    int32 input_width = 416;
    int32 input_height = 416;
    float input_mean = 0;
    float input_std = 255;
    string input_layer = "input_1:0";
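    // Note: these three output tensors are the raw conv feature maps (the conv2d_59/67/75
    // BiasAdd outputs shown by model.output above), not decoded boxes/scores/classes.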
    std::vector<std::string> output_layer = {"conv2d_59/BiasAdd:0", "conv2d_67/BiasAdd:0", "conv2d_75/BiasAdd:0" };

    std::unique_ptr<tensorflow::Session> session;
    string graph_path = tensorflow::io::JoinPath(root_dir, graph);
    Status load_graph_status = LoadGraph(graph_path, &session);

    std::vector<Tensor> resized_tensors;
    string image_path = tensorflow::io::JoinPath(root_dir, image);
    Status read_tensor_status = ReadTensorFromImageFile(image_path, input_height, input_width, 
    input_mean, input_std, &resized_tensors);

    Tensor inpTensor = Tensor(DT_FLOAT, TensorShape({ 1, input_height, input_width, 3 }));
    std::vector<Tensor> outputs;
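    // Read the image with OpenCV, resize it to the 416x416 network input,
    // and scale pixels to [0, 1] (input_mean = 0, input_std = 255).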
    cv::Mat srcImage = cv::imread(image);
    cv::resize(srcImage, srcImage, cv::Size(input_width, input_height));
    srcImage.convertTo(srcImage, CV_32FC3);
    srcImage = srcImage / 255;  
    string ty = type2str(srcImage.type());
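    // Wrap the tensor's underlying buffer in a cv::Mat and copy the preprocessed image into it.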
    float *p = inpTensor.flat<float>().data();
    cv::Mat tensorMat(input_height, input_width, CV_32FC3, p);
    srcImage.convertTo(tensorMat, CV_32FC3);
    Status run_status = session->Run({{ input_layer, inpTensor }}, output_layer, {}, &outputs);
    int cc = 1;
    auto output_detection_class = outputs[0].tensor<float, 4>();
    std::cout << "detection scores" << std::endl;
    std::cout << "typeid(output_detection_scoreclass).name->" << typeid(output_detection_class).name() << std::endl;
    for (int i = 0; i < 13; ++i)
    {
        for (int j = 0; j < 13; ++j)
        {
            for (int k = 0; k < 21; ++k)
            {
                // batch index must be 0 (there is only one image in the batch);
                // (0, i, j, k) reads one raw value from the 13x13x21 feature map
                printf("i->%d, j->%d, k->%d\t", i, j, k);
                std::cout << output_detection_class(0, i, j, k) << "\t";
                cc += 1;
                if (cc % 4 == 0)
                {
                    std::cout << "\n";
                }
            }
        }
        std::cout << std::endl;
    }
    return 0;
}

The output of the C++ version's inference part is:

outputs.size()-> 3

outputs[0].shape()-> [1,13,13,21]

outputs[1].shape()-> [1,26,26,21]

outputs[2].shape()-> [1,52,52,21]

But the output I get is pretty weird...

(The values in outputs[0] don't seem to be scores, classes, or coordinates...) Yolo output result

So I'm wondering: is it because I'm missing the part written in Python before the inference? Or am I using the wrong way to get my output data?

I've checked some related questions and answers...

1. Yolo v3 model output clarification with keras

2. Convert YoloV3 output to coordinates of bounding box, label and confidence

3. How to access tensorflow::Tensor C++

But I still can't figure out how to make it work :(

I also found a repo which might be helpful. I've taken a look at its yolo.cpp, but its model's output tensor shape is different from mine, so I'm not sure if I can reuse the code directly. Its output tensor is:

tf.Tensor 'import/output:0' shape=(?, 735) dtype = float32

Any help or advice is appreciated...

asked Nov 07 '22 by Coco Yen


1 Answer

In case you're still struggling with this, I don't see where you are applying the Sigmoid and Exp to the output layer values.

You might look at this article, which describes how to handle the output:

https://medium.com/analytics-vidhya/yolo-v3-theory-explained-33100f6d193
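
To make that more concrete: each of your three output tensors has 21 channels, which presumably comes from 3 anchors per scale × (4 box offsets + 1 objectness + 2 class scores) = 21. For every grid cell and every anchor, the x/y offsets, the objectness and the class scores go through a sigmoid, and the width/height go through exp and get multiplied by the anchor size, which is what yolo_boxes_and_scores does on the Python side. Below is a rough, untested C++ sketch of that decoding step; DecodeScale, Detection, the hard-coded 416 input size and the anchors argument are placeholders of mine, not names from your code or from TensorFlow:

#include <cmath>
#include <vector>

struct Detection { float x, y, w, h, score; int cls; };

static float Sigmoid(float x) { return 1.0f / (1.0f + std::exp(-x)); }

// Decode one YOLO scale with raw shape [1, grid, grid, 3 * (5 + num_classes)].
// Per anchor, the channel layout is [tx, ty, tw, th, objectness, class_0, class_1, ...]:
//   bx = (sigmoid(tx) + column) / grid      by = (sigmoid(ty) + row) / grid
//   bw = anchor_w * exp(tw) / input_width   bh = anchor_h * exp(th) / input_height
//   score = sigmoid(objectness) * sigmoid(class_score)
// `feature` is what outputs[l].tensor<float, 4>() returns; `anchors` holds the three
// (width, height) pairs for this scale, in pixels of the 416x416 network input.
template <typename TensorMap>
std::vector<Detection> DecodeScale(const TensorMap& feature, int grid,
                                   const float anchors[3][2], int num_classes,
                                   float score_threshold) {
  std::vector<Detection> dets;
  const int stride = 5 + num_classes;   // 7 channels per anchor when num_classes == 2
  for (int i = 0; i < grid; ++i) {      // grid row (y)
    for (int j = 0; j < grid; ++j) {    // grid column (x)
      for (int a = 0; a < 3; ++a) {     // anchor index
        const int base = a * stride;
        const float tx  = feature(0, i, j, base + 0);
        const float ty  = feature(0, i, j, base + 1);
        const float tw  = feature(0, i, j, base + 2);
        const float th  = feature(0, i, j, base + 3);
        const float obj = Sigmoid(feature(0, i, j, base + 4));
        for (int c = 0; c < num_classes; ++c) {
          const float score = obj * Sigmoid(feature(0, i, j, base + 5 + c));
          if (score < score_threshold) continue;
          Detection d;
          d.x = (Sigmoid(tx) + j) / grid;              // box center x in [0, 1]
          d.y = (Sigmoid(ty) + i) / grid;              // box center y in [0, 1]
          d.w = anchors[a][0] * std::exp(tw) / 416.0f; // box width  in [0, 1]
          d.h = anchors[a][1] * std::exp(th) / 416.0f; // box height in [0, 1]
          d.score = score;
          d.cls = c;
          dets.push_back(d);
        }
      }
    }
  }
  return dets;
}

You would call this three times, e.g. DecodeScale(outputs[0].tensor<float, 4>(), 13, ...) with the three largest anchors, then the 26 and 52 grids with the smaller ones (that is what the anchor_mask in yolo_eval encodes). After that you still need the last part of yolo_eval: per-class non-max suppression over the surviving boxes (a small hand-written IoU loop is enough), and finally scaling the normalized boxes back to the original image size, the way yolo_eval does with image_shape.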

answered Nov 14 '22 by Bryan Greenway