
Finding the height above water level of rocks

I am currently helping a friend working on a geophysical project. I'm not by any means an image processing pro, but it's fun to play around with these kinds of problems. =)

The aim is to estimate the height of small rocks sticking out of water, from surface to top.

The experimental equipment will be a ~10 MP camera mounted on a distance meter with a built-in laser pointer. The operator will point this at a rock and press a trigger, which will register a distance along with a photo of the rock, which will be at the center of the image.

The equipment can be assumed to always be held at a fixed distance above the water.
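As a side note, once the rock's extent in pixels is known, converting it to a metric height is straightforward under a pinhole camera model. This is a minimal sketch of that conversion (my own illustration, not part of the planned equipment; the focal length and pixel values are made up):

```python
# Hedged sketch: convert a pixel extent to metres using the laser-measured
# distance, assuming a pinhole camera model viewed roughly fronto-parallel.
# focal_px (focal length expressed in pixels) would come from calibration.

def rock_height_m(distance_m, top_px, waterline_px, focal_px):
    """Height of an object spanning (waterline_px - top_px) pixels,
    seen at distance_m with camera focal length focal_px (in pixels)."""
    extent_px = waterline_px - top_px
    return distance_m * extent_px / focal_px

# Example with invented numbers: rock spans 140 px at a measured 25 m,
# assumed focal length of 2800 px.
h = rock_height_m(25.0, top_px=1400, waterline_px=1540, focal_px=2800)
print(round(h, 2))  # 25 * 140 / 2800 = 1.25
```

So the hard part is really the segmentation: finding `top_px` and `waterline_px` reliably.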

As I see it there are a number of problems to overcome:

  1. Lighting conditions

    • Depending on the time of day etc., the rock might be brighter than the water or the opposite.
    • Sometimes the rock will have a color very close to the water.
    • The position of shadows will move throughout the day.
    • Depending on how rough the water is, there might sometimes be a reflection of the rock in the water.
  2. Diversity

    • The rock is not evenly shaped.
    • Depending on the rock type, lichen growth, etc., the look of the rock changes.

Fortunately, there is no shortage of test data: pictures of rocks in water are easy to come by. Here are some sample images:

alt text

I've run an edge detector on the images, and especially in the fourth picture the poor contrast makes it hard to see the edges:

alt text

Any ideas would be greatly appreciated!
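To show what I mean by the contrast problem, here is a sketch of the kind of edge detector I ran (a plain Sobel gradient magnitude in NumPy on synthetic data; the actual detector may differ, and the numbers are invented):

```python
import numpy as np

# Sobel gradient magnitude: the edge response scales with the intensity
# step, so a low-contrast rock/water boundary produces a weak edge.

def sobel_magnitude(gray):
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = gray.shape
    out = np.zeros((h, w))
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            patch = gray[y - 1:y + 2, x - 1:x + 2]
            out[y, x] = np.hypot(np.sum(kx * patch), np.sum(ky * patch))
    return out

# Same step structure, very different response strength:
strong = np.hstack([np.zeros((5, 5)), np.ones((5, 5))])          # rock vs water
weak = np.hstack([np.full((5, 5), 0.48), np.full((5, 5), 0.52)]) # poor contrast
print(sobel_magnitude(strong).max(), sobel_magnitude(weak).max())
```

The weak step gives a response 25 times smaller here, which is why a single global edge threshold fails on pictures like the fourth one.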

asked Jan 19 '11 by Theodor


2 Answers

I don't think that edge detection is the best approach to detect the rocks. Other objects, like the mountains or even the reflections in the water, will also produce edges.

I suggest that you try a pixel classification approach to segment the rocks from the background of the image:

  • For each pixel in the image, extract a set of image descriptors from an NxN neighborhood centered at that pixel.
  • Select a set of images and manually label the pixels as rock or background.
  • Use the labeled pixels and the corresponding image descriptors to train a classifier (e.g., a Naive Bayes classifier).

Since the rocks tend to have similar texture, I would use texture image descriptors to train the classifier. You could try, for example, extracting a few statistical measures from each color channel (R, G, B), such as the mean and standard deviation of the intensity values.
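The steps above can be sketched roughly as follows (all data, means, and class labels here are invented for illustration; in practice the descriptors would come from hand-labeled pixels in real photos):

```python
import numpy as np

# Descriptor: mean and std of each channel over the NxN window around a pixel.
def descriptors(img, y, x, n=5):
    r = n // 2
    patch = img[y - r:y + r + 1, x - r:x + r + 1]          # (n, n, 3)
    return np.concatenate([patch.mean(axis=(0, 1)), patch.std(axis=(0, 1))])

# Tiny Gaussian Naive Bayes: per-class feature means/variances + priors.
class GaussianNB:
    def fit(self, X, y):
        self.classes = np.unique(y)
        self.mu = np.array([X[y == c].mean(axis=0) for c in self.classes])
        self.var = np.array([X[y == c].var(axis=0) + 1e-9 for c in self.classes])
        self.prior = np.array([np.mean(y == c) for c in self.classes])
        return self

    def predict(self, X):
        # log-likelihood per class: sum of per-feature Gaussian log-densities
        ll = -0.5 * (((X[:, None, :] - self.mu) ** 2) / self.var
                     + np.log(2 * np.pi * self.var)).sum(axis=2)
        return self.classes[np.argmax(ll + np.log(self.prior), axis=1)]

# Fake training set: "rock" pixels bright and textured, "water" darker/smoother.
rng = np.random.default_rng(0)
rock = rng.normal([150, 140, 130, 25, 25, 25], 5, size=(50, 6))
water = rng.normal([60, 80, 110, 4, 4, 4], 5, size=(50, 6))
X = np.vstack([rock, water])
y = np.array([1] * 50 + [0] * 50)          # 1 = rock, 0 = water
clf = GaussianNB().fit(X, y)
print(clf.predict(np.array([[148, 141, 128, 24, 26, 23]])))  # rock-like sample
```

Classifying every pixel this way gives a rock/background mask, from which the top and waterline rows can be read off.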

answered Oct 04 '22 by Alceu Costa


Pixel classification might work here, but it will never yield 100% accuracy. The variance in the data is really big: rocks have different colours (further distorted by lighting) and different textures. So one must account for global information as well.

The problem you deal with is foreground extraction. There are two approaches I am aware of.

  1. Energy minimization via graph cuts, see e.g. http://en.wikipedia.org/wiki/GrabCut (there are links to the paper and an OpenCV implementation). Some initialization ("seeds") is needed, either from a user or from prior knowledge (e.g. the rock is in the center while water is on the periphery). Another form of input is an approximate bounding rectangle; this is how the foreground extraction tool in MS Office 2010 is implemented. The energy function over possible foreground/background labellings encourages the foreground to be similar to the foreground seeds and the boundary to be smooth, so the minimum of the energy corresponds to a good foreground mask. Note that with the pixel classification approach one has to pre-label a lot of images to learn from, after which segmentation is automatic, while with this approach one has to select seeds for each query image (or they are chosen implicitly).

  2. Active contours, a.k.a. snakes, also require some user interaction. They are more like the Photoshop Magic Wand tool: they also try to find a smooth boundary, but do not consider the inner area.
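To make the graph-cut idea in option 1 concrete, here is a toy illustration (not GrabCut itself, and all numbers are invented): a 1-D "image" of intensities, unary costs from squared distance to foreground/background seed means, a pairwise smoothness term between neighbours, and a small Edmonds-Karp max-flow to find the minimum cut:

```python
from collections import deque

# Min-cut segmentation on a 1-D chain of pixels. Cutting the s->p edge puts
# pixel p on the background side (paying the background data cost); cutting
# p->t labels it foreground. Neighbour edges of weight lam penalise label
# changes, so the minimum cut balances data fit against boundary smoothness.

def max_flow_cut(n_nodes, edges, s, t):
    cap = [[0.0] * n_nodes for _ in range(n_nodes)]
    for u, v, c in edges:
        cap[u][v] += c
    while True:                                   # Edmonds-Karp: BFS augmenting paths
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v in range(n_nodes):
                if v not in parent and cap[u][v] > 1e-12:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            break
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        f = min(cap[u][v] for u, v in path)       # bottleneck capacity
        for u, v in path:
            cap[u][v] -= f
            cap[v][u] += f
    reach, q = {s}, deque([s])                    # source side of the min cut
    while q:
        u = q.popleft()
        for v in range(n_nodes):
            if v not in reach and cap[u][v] > 1e-12:
                reach.add(v)
                q.append(v)
    return reach

pixels = [0.9, 0.8, 0.7, 0.3, 0.2]    # bright rock fading into dark water
fg_mean, bg_mean, lam = 0.85, 0.2, 0.1
s, t = len(pixels), len(pixels) + 1
edges = []
for i, p in enumerate(pixels):
    edges.append((s, i, (p - bg_mean) ** 2))   # cost of labelling p background
    edges.append((i, t, (p - fg_mean) ** 2))   # cost of labelling p foreground
for i in range(len(pixels) - 1):               # smoothness between neighbours
    edges += [(i, i + 1, lam), (i + 1, i, lam)]
fg = max_flow_cut(t + 1, edges, s, t)
print([1 if i in fg else 0 for i in range(len(pixels))])
```

On a real image the nodes form a 2-D grid, the unary costs come from seed statistics (colour histograms or GMMs in GrabCut), and a much faster max-flow implementation is used, but the structure is the same.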

Both methods might have problems with the reflections (pixel classification definitely will). If that is the case, you may try to find an approximate vertical symmetry and delete the lower part, if any. You can also ask the user to mark the reflection as background while collecting statistics for graph cuts.
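The symmetry idea could be sketched like this (invented data; on a real photo the search would be restricted to a strip around the rock, and water ripples would smear the mirror image):

```python
import numpy as np

# For each candidate waterline row, compare the band just above with the
# vertically mirrored band just below; the row with the smallest mean
# squared difference is the symmetry axis, and everything below it can be
# discarded as reflection.

def find_waterline(gray, band=3):
    best_row, best_err = None, np.inf
    for r in range(band, gray.shape[0] - band):
        above = gray[r - band:r]
        below = gray[r:r + band][::-1]        # mirror the lower band
        err = np.mean((above - below) ** 2)
        if err < best_err:
            best_row, best_err = r, err
    return best_row

rock = np.linspace(1.0, 0.6, 6)               # rock, top to waterline
scene = np.concatenate([rock, rock[::-1]])    # perfect mirror reflection below
print(find_waterline(scene[:, None]))         # axis at row 6, the waterline
```

Conveniently, the symmetry axis is the waterline itself, which is exactly the lower reference needed for the height measurement.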

answered Oct 04 '22 by Roman Shapovalov