
How to train TensorFlow Object Detection with images that do not contain objects?

I am training an object detection network using TensorFlow's Object Detection API:

https://github.com/tensorflow/models/tree/master/research/object_detection

I can successfully train a network based on my own images and labels. However, I have a large dataset of images that do not contain any of my labeled objects, and I want to be able to train the network to not detect anything in these images.

From what I understand of TensorFlow object detection, I need to give it a set of images and corresponding XML files that box and label the objects in each image. The scripts convert the XML to CSV and then to TFRecords for training, and they do not allow XML files that have no objects.

How can I give it images and XML files that contain no objects?

Or, how does the network learn what is not an object?

For example, if you want to detect "hot dogs", you can train it with a set of images containing hot dogs. But how do you train it on what is not a hot dog?
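
For context, the conversion step described above usually emits one CSV row per <object> element, which is exactly why negative images get dropped. The snippet below is a minimal sketch assuming a Pascal VOC-style annotation (the kind LabelImg writes) and a typical xml_to_csv-style loop; the file name and layout are illustrative, not taken from the repo:

import xml.etree.ElementTree as ET

# A Pascal VOC-style annotation for a "negative" image: it still records the
# file name and size, but contains no <object> elements at all.
NEGATIVE_XML = """
<annotation>
  <filename>no_hotdogs_001.jpg</filename>
  <size><width>640</width><height>480</height><depth>3</depth></size>
</annotation>
"""

root = ET.fromstring(NEGATIVE_XML)
rows = []
for obj in root.findall('object'):      # loop body never runs: no objects
    rows.append(obj.find('name').text)

print(len(rows))  # 0 -> this image never reaches the CSV or the TFRecords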

James asked Nov 08 '18

People also ask

How many images do you need to train object detection?

For each label you must have at least 10 images, each with at least one annotation (bounding box and the label). However, for model training purposes it's recommended you use about 1000 annotations per label. In general, the more images per label you have the better your model will perform.

How do you train a custom object detection model?

Before you can train your custom object detector, you must convert your data into the TFRecord format. Since we need to train as well as validate our model, the dataset is split into a training set (train.record) and a validation set (val.record).


2 Answers

An object detection CNN can learn what is not an object simply by letting it see example images without any labelled objects.

There are two main architecture types:

  1. two-stage, where the first stage proposes objects/regions (the RPN) and the second performs classification and bounding-box fine-tuning;
  2. one-stage, which directly classifies and regresses bounding boxes based on the feature vector corresponding to a certain cell in the feature map.

In either case, there's a part responsible for deciding what is an object and what is not. In the RPN you have an "objectness" score, and in one-stage detectors there's the classification confidence, which usually includes a background class (i.e. everything that is not one of the supported classes).

So in both cases, if a specific region in an image doesn't contain any supported class, you teach the CNN to decrease the objectness score or increase the background confidence accordingly.
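
To make that concrete for the TensorFlow Object Detection API: in the TFRecord input, a purely negative image is just an example whose box and class feature lists are empty. The sketch below is hand-written for illustration (it is not the repo's own record-creation code), using the standard feature keys the object_detection decoders expect:

import tensorflow as tf

def negative_example(encoded_jpg, filename, width, height):
    """Build a tf.train.Example for an image with no labelled objects."""
    feature = {
        'image/encoded': tf.train.Feature(
            bytes_list=tf.train.BytesList(value=[encoded_jpg])),
        'image/format': tf.train.Feature(
            bytes_list=tf.train.BytesList(value=[b'jpeg'])),
        'image/filename': tf.train.Feature(
            bytes_list=tf.train.BytesList(value=[filename.encode('utf8')])),
        'image/height': tf.train.Feature(
            int64_list=tf.train.Int64List(value=[height])),
        'image/width': tf.train.Feature(
            int64_list=tf.train.Int64List(value=[width])),
        # Empty lists: no boxes, no classes. During training these images only
        # produce "background" / low-objectness targets.
        'image/object/bbox/xmin': tf.train.Feature(
            float_list=tf.train.FloatList(value=[])),
        'image/object/bbox/ymin': tf.train.Feature(
            float_list=tf.train.FloatList(value=[])),
        'image/object/bbox/xmax': tf.train.Feature(
            float_list=tf.train.FloatList(value=[])),
        'image/object/bbox/ymax': tf.train.Feature(
            float_list=tf.train.FloatList(value=[])),
        'image/object/class/text': tf.train.Feature(
            bytes_list=tf.train.BytesList(value=[])),
        'image/object/class/label': tf.train.Feature(
            int64_list=tf.train.Int64List(value=[])),
    }
    return tf.train.Example(features=tf.train.Features(feature=feature))

Mixing such examples into train.record alongside your positive examples is essentially all the "negative training" the detector needs: on these images, every anchor or region can only be matched to background.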

netanel-sam answered Oct 12 '22


You might want to take a look at this solution.

For the TensorFlow Object Detection API to include your negative examples, you need to add them to the CSV file you have created from the XML, either by modifying the script that generates the CSV file or by adding the examples afterwards.
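
One way to do the first option is sketched below: keep the usual one-row-per-object behaviour, but also emit a placeholder row for images whose XML has no <object> entries. The column layout and the placeholder convention are assumptions, so adapt them to your own conversion and TFRecord-generation scripts:

import glob
import xml.etree.ElementTree as ET

def collect_rows(annotation_dir):
    rows = []
    for xml_file in glob.glob(annotation_dir + '/*.xml'):
        root = ET.parse(xml_file).getroot()
        filename = root.find('filename').text
        width = int(root.find('size/width').text)
        height = int(root.find('size/height').text)
        objects = root.findall('object')
        for obj in objects:
            box = obj.find('bndbox')
            rows.append((filename, width, height, obj.find('name').text,
                         int(box.find('xmin').text), int(box.find('ymin').text),
                         int(box.find('xmax').text), int(box.find('ymax').text)))
        if not objects:
            # Placeholder row for a negative image; the TFRecord generator must
            # treat it as "no boxes, no classes" and write an example with empty
            # box/class lists instead of a real bounding box.
            rows.append((filename, width, height, None, 0, 0, 0, 0))
    return rows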

To generate XML files without class labels in LabelImg, you can press "Verify Image".

Jurgz answered Oct 12 '22