
How to evaluate a pretrained model in the TensorFlow Object Detection API

I'm trying to work with the recently released TensorFlow Object Detection API, and was wondering how I could evaluate one of the pretrained models they provide in their model zoo? e.g. how can I get the mAP value for that pretrained model?

Since the script they've provided seems to use checkpoints (according to their documentation), I've tried making a dumb copy of a checkpoint file that pointed to the provided model.ckpt.data-00000-of-00001 file from their model zoo, but eval.py didn't like that:

checkpoint
   model_checkpoint_path: "model.ckpt.data-00000-of-00001"

I've considered briefly training on top of the pretrained model and then evaluating that, but I'm not sure that would give me the right metric.

Sorry if this is a rudimentary question - I'm just starting out with TensorFlow and want to verify I'm getting the right numbers. I'd appreciate any pointers!

EDIT:

I made a checkpoint file as per Jonathan's answer:

model_checkpoint_path: "model.ckpt"
all_model_checkpoint_paths: "model.ckpt"

which the evaluation script accepted and began evaluating using the COCO dataset. However, the evaluation stopped and reported a shape mismatch:

...
[[Node: save/Assign_19 = Assign[T=DT_FLOAT, _class=["loc:@BoxPredictor_4/ClassPredictor/weights"], use_locking=true, validate_shape=true, _device="/job:localhost/replica:0/task:0/gpu:0"](BoxPredictor_4/ClassPredictor/weights, save/RestoreV2_19/_15)]]
2017-07-05 18:40:11.969641: W tensorflow/core/framework/op_kernel.cc:1158] Invalid argument: Assign requires shapes of both tensors to match. lhs shape= [1,1,256,486] rhs shape= [1,1,256,546]
[[Node: save/Assign_19 = Assign[T=DT_FLOAT, _class=["loc:@BoxPredictor_4/ClassPredictor/weights"], use_locking=true, validate_shape=true, _device="/job:localhost/replica:0/task:0/gpu:0"](BoxPredictor_4/ClassPredictor/weights, save/RestoreV2_19/_15)]]
2017-07-05 18:40:11.969725: W tensorflow/core/framework/op_kernel.cc:1158] 
...
Invalid argument: Assign requires shapes of both tensors to match. lhs shape= [1,1,256,486] rhs shape= [1,1,256,546]
tensorflow.python.framework.errors_impl.InvalidArgumentError: Assign requires shapes of both tensors to match. lhs shape= [1,1,256,486] rhs shape= [1,1,256,546]

What might have caused this shape mismatch? And how do I fix it?

asked Jun 22 '17 by jaydee713


People also ask

How do you evaluate an object's detection model?

To evaluate object detection models like R-CNN and YOLO, the mean average precision (mAP) is used. The mAP compares the ground-truth bounding box to the detected box and returns a score. The higher the score, the more accurate the model is in its detections.


2 Answers

You can evaluate the pretrained models by running the eval.py script. It will ask you to point to a config file (which will be in the samples/configs directory) and a checkpoint, and for the checkpoint you provide a path of the form .../.../model.ckpt (dropping any extension, such as .meta or .data-00000-of-00001).

You also have to create a file named "checkpoint" inside the directory that contains the checkpoint you'd like to evaluate, and write the following two lines inside it:

model_checkpoint_path: "path/to/model.ckpt"
all_model_checkpoint_paths: "path/to/model.ckpt"

(where you modify path/to/ appropriately)
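
Putting it together, an eval.py invocation looks something like the following (the paths are placeholders, and the flag names are taken from the script's documentation at the time, so double-check them against your checkout of the models repository):

    # run from the models/research directory
    python object_detection/eval.py \
        --logtostderr \
        --pipeline_config_path=path/to/samples/configs/your_model.config \
        --checkpoint_dir=path/to/directory_containing_model.ckpt \
        --eval_dir=path/to/eval_output_dir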

The number that you get at the end is mean Average Precision using 50% IoU as the cutoff threshold for true positives. This is slightly different from the metric reported in the model zoo, which uses the COCO mAP metric and averages over multiple IoU thresholds (0.5 to 0.95).
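
If you specifically want the COCO-style number, newer checkouts of the API let you select the evaluator through the metrics_set field of eval_config in the pipeline config. This is an assumption about your version of the code, since the field was added after this question was asked, so check eval.proto before relying on it:

    eval_config: {
        metrics_set: "coco_detection_metrics"
        ...
    }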

answered Oct 19 '22 by Jonathan Huang


You can also use model_main.py to evaluate your model.

If you want to evaluate your model on validation data, you should run:

python models/research/object_detection/model_main.py --pipeline_config_path=/path/to/pipeline_file --model_dir=/path/to/output_results --checkpoint_dir=/path/to/directory_holding_checkpoint --run_once=True

If you want to evaluate your model on training data, you should set 'eval_training_data' to True, that is:

python models/research/object_detection/model_main.py --pipeline_config_path=/path/to/pipeline_file --model_dir=/path/to/output_results --eval_training_data=True --checkpoint_dir=/path/to/directory_holding_checkpoint --run_once=True

I'll also add some comments to clarify the options above:

--pipeline_config_path: path to the "pipeline.config" file used to train the detection model. This file should include the paths to the TFRecord files (train and test files) that you want to evaluate, i.e.:

    ...
    train_input_reader: {
        tf_record_input_reader {
                #path to the training TFRecord
                input_path: "/path/to/train.record"
        }
        #path to the label map 
        label_map_path: "/path/to/label_map.pbtxt"
    }
    ...
    eval_input_reader: {
        tf_record_input_reader {
            #path to the testing TFRecord
            input_path: "/path/to/test.record"
        }
        #path to the label map 
        label_map_path: "/path/to/label_map.pbtxt"
    }
    ...

--model_dir: output directory where the resulting metrics will be written, in particular the "events.*" files that can be read by TensorBoard.
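
For example, once the evaluation has run you can inspect those metrics by pointing TensorBoard at that directory (assuming TensorBoard was installed alongside TensorFlow):

    tensorboard --logdir=/path/to/output_results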

--checkpoint_dir: directory holding a checkpoint. That is, the model directory where the checkpoint files ("model.ckpt.*") have been written, either during the training process or after exporting the model with "export_inference_graph.py". In your case, you should point it to the pretrained model folder downloaded from https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/detection_model_zoo.md.
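
For instance, after downloading one of the archives from that page (the archive name below is just a placeholder; use whichever model you actually picked), the extracted folder is what --checkpoint_dir should point to:

    # replace the archive name with the one you downloaded from the model zoo
    tar -xzf ssd_mobilenet_v1_coco.tar.gz
    ls ssd_mobilenet_v1_coco/
    # expect model.ckpt.data-00000-of-00001, model.ckpt.index, model.ckpt.meta, among other files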

--run_once: True to run just one round of evaluation.

answered Oct 19 '22 by Juan Rodriguez