I want to detect objects inside cells of microscopy images. I have a lot of annotated images (approx. 50,000 images with an object and 500,000 without one).
So far I have tried extracting features with HOG and classifying with logistic regression and LinearSVC. I have tried several HOG parameters and color spaces (RGB, HSV, LAB), but I don't see a big difference; the prediction rate is about 70 %.
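Here is roughly what my pipeline looks like (the data below is random placeholder arrays standing in for my real annotated crops, and the 64x64 size and HOG parameters are just the values I have been experimenting with):

```python
import numpy as np
from skimage.feature import hog
from sklearn.svm import LinearSVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Placeholder data standing in for the real annotated image crops.
images = rng.random((200, 64, 64))
y = rng.integers(0, 2, size=200)

# One HOG descriptor per image.
X = np.array([hog(im, orientations=9, pixels_per_cell=(8, 8),
                  cells_per_block=(2, 2)) for im in images])

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

clf = LinearSVC(C=1.0)
clf.fit(X_train, y_train)
print("accuracy:", clf.score(X_test, y_test))
```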
I have several questions. How many images should I use to train the descriptor, and how many to test the prediction?
I have tried about 1,000 training images, which gives me 55 % positive, and 5,000, which gives me about 72 % positive. However, it also depends a lot on the test set: sometimes a test set reaches 80-90 % positively detected images.
Here are two examples containing an object and two images without an object:
Another problem is, sometimes the images contain several objects:
Should I increase the size of the training set? How should I choose the images for the training set, just randomly? What else could I try?
Any help would be much appreciated; I have just started exploring machine learning. I am using Python (scikit-image & scikit-learn).
Histogram of Oriented Gradients (HOG) is a feature descriptor, like the Canny edge detector or SIFT (Scale-Invariant Feature Transform). It is used in computer vision and image processing for object detection.
Object detection is a computer vision technique for locating instances of objects in images or videos. Object detection algorithms typically leverage machine learning or deep learning to produce meaningful results.
HOG features are widely used for object detection. HOG decomposes an image into small square cells, computes a histogram of oriented gradients in each cell, normalizes the result using a block-wise pattern, and returns a descriptor for each cell.
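With `skimage.feature.hog`, the descriptor length follows directly from this cell/block layout. A small illustration on a synthetic image (the bar pattern and parameter values are arbitrary choices for the demo):

```python
import numpy as np
from skimage.feature import hog

image = np.zeros((64, 64))
image[16:48, 30:34] = 1.0  # a vertical bar produces strong gradients

desc = hog(image, orientations=9, pixels_per_cell=(8, 8),
           cells_per_block=(2, 2), block_norm='L2-Hys')
# 64/8 = 8 cells per side -> 7x7 overlapping 2x2 blocks
# -> 7 * 7 * 2 * 2 * 9 = 1764 values
print(desc.shape)  # (1764,)
```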
I think you're on the right path, but let me raise some considerations:
1 - The size of the training set is always important in classification problems (usually, more is better). However, you also need good annotations, and your method should be robust to outliers.
2 - From the images you posted, it seems that a color histogram would be more discriminative than HOG. When using color histograms I usually go for the Lab color space with correlated a-b histograms; L is luminance and is very dependent on image acquisition (e.g. brightness). One method used for pedestrian re-identification is to divide the image into blocks and compute histograms inside these blocks. This can be helpful.
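A minimal sketch of that block-wise a-b histogram idea, assuming an RGB crop in `rgb` (the 4x4 block grid, 8 bins per channel, and the random placeholder image are all arbitrary choices):

```python
import numpy as np
from skimage.color import rgb2lab

rng = np.random.default_rng(0)
rgb = rng.random((64, 64, 3))  # placeholder for a real cell crop

lab = rgb2lab(rgb)
a, b = lab[..., 1], lab[..., 2]

features = []
n = 4  # 4x4 grid of blocks
h, w = a.shape
for i in range(n):
    for j in range(n):
        a_blk = a[i*h//n:(i+1)*h//n, j*w//n:(j+1)*w//n].ravel()
        b_blk = b[i*h//n:(i+1)*h//n, j*w//n:(j+1)*w//n].ravel()
        # Joint (correlated) a-b histogram for this block.
        hist, _, _ = np.histogram2d(a_blk, b_blk, bins=8,
                                    range=[[-128, 127], [-128, 127]])
        features.append(hist.ravel() / hist.sum())
features = np.concatenate(features)  # one descriptor per image
```

Concatenating the per-block histograms keeps some spatial information while ignoring L, which helps against brightness variation between acquisitions.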
3 - The best way to evaluate your classification method is cross-validation: http://en.wikipedia.org/wiki/Cross-validation_%28statistics%29#k-fold_cross-validation
4 - Have you tried other classifiers? Weka can be very useful for easily testing different methods/parameters: http://www.cs.waikato.ac.nz/ml/weka/
5 - Finally, if you still get bad results and have no idea which kind of features you should be using, you can apply deep neural networks!
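Since you are already in scikit-learn, k-fold cross-validation is a few lines (the data below is synthetic; plug in your real feature matrix and labels):

```python
import numpy as np
from sklearn.model_selection import cross_val_score, StratifiedKFold
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.random((300, 20))          # placeholder feature vectors
y = rng.integers(0, 2, size=300)   # placeholder labels

# Stratified folds keep the class ratio in every split, which matters
# with an imbalanced dataset like yours.
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=cv)
print(scores.mean(), scores.std())  # mean accuracy and spread across folds
```

The spread across folds also tells you how much your "sometimes 80-90 %" result is just test-set luck.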
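If you would rather stay in Python than switch to Weka, scikit-learn makes comparing classifiers just as easy (synthetic data again; the classifier choices and parameters here are only examples):

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.random((300, 20))
y = rng.integers(0, 2, size=300)

# Same cross-validation protocol for every candidate classifier.
for clf in (LogisticRegression(max_iter=1000),
            LinearSVC(),
            RandomForestClassifier(n_estimators=100, random_state=0)):
    scores = cross_val_score(clf, X, y, cv=5)
    print(type(clf).__name__, scores.mean())
```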
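As a first step in that direction, scikit-learn's `MLPClassifier` gives you a small fully connected network (not a convolutional one, which is what you would eventually want for raw images); the data and layer sizes below are placeholders:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.random((300, 20))          # placeholder feature vectors
y = rng.integers(0, 2, size=300)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
net = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500,
                    random_state=0)
net.fit(X_train, y_train)
print("test accuracy:", net.score(X_test, y_test))
```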
Hope it helps.