I've updated to TensorFlow 1.9 and the latest master of the Object Detection API. When running a training/evaluation session that previously worked fine (on TensorFlow 1.6, I believe), training appears to proceed as expected, but I only get evaluation output and metrics for one image (the first).
In TensorBoard the image is labeled 'Detections_Left_Groundtruth_Right'. The evaluation step itself also completes extremely quickly, which leads me to believe this isn't just a TensorBoard display issue.
Looking in model_lib.py, I see some suspicious code (near line 349):
eval_images = (
    features[fields.InputDataFields.original_image] if use_original_images
    else features[fields.InputDataFields.image])
eval_dict = eval_util.result_dict_for_single_example(
    eval_images[0:1],
    features[inputs.HASH_KEY][0],
    detections,
    groundtruth,
    class_agnostic=class_agnostic,
    scale_to_absolute=True)
This reads to me as though the evaluator always runs a single evaluation on the first image of each batch (note the eval_images[0:1] slice). Has anyone seen and/or fixed this? I will update if changing the above works.
You are right: object detection supports only a batch size of 1 for evaluation, so each eval batch contains exactly one image. The number of images evaluated equals the number of eval steps, and eval metrics are accumulated across batches.
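A minimal NumPy sketch (my own, not from model_lib.py) of why the [0:1] slice drops nothing when the eval batch size is 1:

import numpy as np

# With an eval batch size of 1, the batch tensor has shape [1, H, W, 3].
batch = np.zeros((1, 300, 300, 3))

# batch[0:1] keeps the batch dimension and selects the (only) image,
# so the slice returns the entire batch -- nothing is skipped.
print(batch[0:1].shape)  # -> (1, 300, 300, 3)

# Only with a larger batch would images beyond the first be ignored:
bigger_batch = np.zeros((8, 300, 300, 3))
print(bigger_batch[0:1].shape)  # -> (1, 300, 300, 3): 7 images unused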
By the way, a change to view more eval images in TensorBoard was just submitted to master.
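For anyone who wants to tune this: as far as I can tell, the number of eval steps and of images shown in TensorBoard is controlled by the eval_config block of pipeline.config. The snippet below is a sketch; verify the field names (num_examples, num_visualizations, metrics_set) against object_detection/protos/eval.proto for your version:

eval_config {
  metrics_set: "coco_detection_metrics"  # standard COCO mAP metrics
  num_examples: 5000                     # images evaluated, one per eval step
  num_visualizations: 10                 # side-by-side detection/groundtruth images logged
}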