
Filter fluctuating lighting with OpenCV

Tags:

c++

opencv

vision

I need to do fairly sensitive color (brightness) measurements in webcam footage, using OpenCV. The problem I am experiencing is that the ambient light fluctuates, which makes it hard to get accurate results. I'm looking for a way to continuously update sequential frames of the video to smooth out the global lighting differences. The light changes I'm trying to filter out occur globally in most or all of the image. I have tried to calculate a difference and subtract that, but with little luck. Does anyone have any advice on how to approach this problem?

EDIT: The two images below are from the same video, with the color changes slightly magnified. If you alternate between them, you'll see that there are slight changes in lighting, probably due to clouds shifting outside. The problem is that these changes obscure any other color changes I might want to detect.

So I would like to filter out these particular changes. Given that I only need part of each frame I capture, I figured it should be possible to filter out the lighting changes, since they also occur in the rest of the footage, outside my area of interest.

I have tried to capture the dominant frequencies in the changes using cv::dft, to simply ignore the lighting changes, but I am not familiar enough with the use of that function. I have only been using OpenCV for a week, so I am still learning.

[Two frames from the same video, showing the slight global lighting shift]

asked Jun 28 '16 by FHannes

3 Answers

Short answer: temporal low-pass filter on illumination as a whole

Consider the illumination, conceptually, as a time sequence of values representing something like the light flux impinging upon the scene being photographed. Your ideal situation is that this function be constant, but the second-best situation is that it vary as slowly as possible. A low-pass filter changes a function that can vary rapidly into one that varies more slowly. The basic steps are thus:

1. Calculate a total illumination function.
2. Compute a new illumination function using a low-pass filter.
3. Normalize the original image sequence to the new illumination values.

(1) The simplest way of calculating an illumination function is to add up the luminance values of every pixel in the image. In simple cases, this might even work; you might guess from my tone that there are a number of caveats.

An important issue is that you'd prefer to add up illumination values not in some color space (such as HSV) but rather in some physical measure of illumination. Going back from a color space to the actual light in the room requires data that isn't in an image, such as the spectral reflectivity of each surface in the scene, so that's unlikely to be feasible. As a proxy, you can use only part of the image, one that has a consistent reflectivity. In the sample images, the desk surface at the top of the image could be used. Select a geometric region and compute a total illumination number from that.
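For illustration, a minimal OpenCV sketch of such a per-frame estimate; the ROI is an assumption you'd replace with a patch of consistent reflectivity (like the desk surface) in your own footage:

```cpp
#include <opencv2/opencv.hpp>

// Average luminance over a fixed reference patch, used as a per-frame
// illumination proxy. Call once per frame with the same ROI.
double roiIllumination(const cv::Mat& frameBGR, const cv::Rect& roi)
{
    cv::Mat gray;
    cv::cvtColor(frameBGR(roi), gray, cv::COLOR_BGR2GRAY);
    return cv::mean(gray)[0];
}
```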

Related to this, if you have regions of the image where the camera has saturated, you've lost a lot of information and the total illumination value won't relate well to the physical illumination. Simply cut out any such regions (but do it consistently across all frames).
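A hedged variant of the same estimate that simply drops near-saturated pixels; the threshold of 250 is an assumption to tune for your camera:

```cpp
#include <opencv2/opencv.hpp>

// Like roiIllumination above, but masks out pixels at or near the sensor's
// saturation point so they don't distort the estimate.
double roiIlluminationMasked(const cv::Mat& frameBGR, const cv::Rect& roi)
{
    cv::Mat gray;
    cv::cvtColor(frameBGR(roi), gray, cv::COLOR_BGR2GRAY);
    cv::Mat mask = gray < 250;       // 255 where unsaturated, 0 elsewhere
    return cv::mean(gray, mask)[0];  // mean over unsaturated pixels only
}
```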

(2) Compute a low-pass filter on the illumination function. Such filters are a fundamental part of every signal-processing package. I don't know enough about OpenCV to say whether it has an appropriate function built in, so you might need another library. There are lots of different kinds of low-pass filters, but they should all give you similar results here.
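If you'd rather not pull in a signal-processing library, one simple option is a first-order exponential moving average, sketched below; alpha is an assumed smoothing constant (smaller values smooth more aggressively):

```cpp
// First-order low-pass (exponential moving average) over a scalar series.
class LowPass
{
public:
    explicit LowPass(double alpha) : alpha_(alpha) {}

    // Feed one raw illumination value, get back the smoothed value.
    double update(double x)
    {
        if (!initialized_) { y_ = x; initialized_ = true; }
        else               { y_ = alpha_ * x + (1.0 - alpha_) * y_; }
        return y_;
    }

private:
    double alpha_;
    double y_ = 0.0;
    bool initialized_ = false;
};
```

Note that this filter only looks backward in time, so it adds a little lag; if you can process the video offline, a symmetric (zero-phase) filter would avoid that.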

(3) Once you've got a low-pass time series, you want to use it as a normalization function for the total illumination. Compute the average value of the low-pass series and divide the series by it, yielding a time series with average value 1. Now transform each image by scaling its pixel values by the corresponding normalization factor. All the warnings about ideally working in a physical illumination space rather than a color space apply.
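Putting the pieces together, here is one reading of this normalization step as a sketch: scale each frame by the ratio of smoothed to raw illumination, so the fast fluctuations cancel while the slow trend survives (the 1e-6 guard is only to avoid division by zero):

```cpp
#include <algorithm>
#include <opencv2/opencv.hpp>

// Rescale a frame so its effective illumination follows the low-passed
// series instead of the raw, fluctuating one.
cv::Mat normalizeFrame(const cv::Mat& frameBGR, double rawIllum, double smoothIllum)
{
    double gain = smoothIllum / std::max(rawIllum, 1e-6);
    cv::Mat out;
    frameBGR.convertTo(out, -1, gain);  // multiply all channels, keep depth
    return out;
}
```

Keep in mind that gains far from 1 will amplify noise or push pixels into clipping, which is another reason to prefer footage where the lighting varies slowly to begin with.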

answered Nov 02 '22 by eh9


If the signal change is global, you could calculate the mean m(i,t) of each line i in each image at time t in your video. Without fluctuating light, the ratio m(i,t)/m(i,t+1) should be 1 for all t. If there is a global change, then m(i,t)/m(i,t+1) should be constant across all i, so it's better to use the mean of m(i,t)/m(i,t+1) over all lines i. This mean value can then be used to correct your image at time t.

You can also work with a ratio like m(i,0)/m(i,t), so that the image at time 0 becomes the reference. Instead of lines you can use columns, or a disc, rectangle...
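A minimal sketch of this idea, assuming frame 0 as the reference and grayscale row means as m(i,t); the ratios are averaged over all rows and applied as a single global gain:

```cpp
#include <opencv2/opencv.hpp>

// Estimate a global gain from per-row mean ratios m(i,0)/m(i,t) and use it
// to correct the current frame toward the reference frame's lighting.
cv::Mat correctToReference(const cv::Mat& frameBGR, const cv::Mat& refBGR)
{
    cv::Mat g, gRef;
    cv::cvtColor(frameBGR, g, cv::COLOR_BGR2GRAY);
    cv::cvtColor(refBGR, gRef, cv::COLOR_BGR2GRAY);
    g.convertTo(g, CV_64F);
    gRef.convertTo(gRef, CV_64F);

    cv::Mat rowMean, rowMeanRef;
    cv::reduce(g, rowMean, 1, cv::REDUCE_AVG);        // m(i,t), one value per row
    cv::reduce(gRef, rowMeanRef, 1, cv::REDUCE_AVG);  // m(i,0)

    cv::Mat ratio;
    cv::divide(rowMeanRef, rowMean, ratio);           // m(i,0) / m(i,t) per row
    double gain = cv::mean(ratio)[0];                 // average over all rows

    cv::Mat out;
    frameBGR.convertTo(out, -1, gain);
    return out;
}
```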

answered Nov 02 '22 by LBerger


I think you can apply homomorphic filtering to each of the frames to compute the reflectance component of the frame. Then you can track the varying reflectance at selected points.

According to the illumination-reflectance model of image formation, the pixel value at a given position is the product of illumination and reflectance: f(x,y) = i(x,y) . r(x,y). The illumination i tends to vary slowly across the image (or, in your case, each frame), and the reflectance r tends to vary rapidly.

Using homomorphic filtering, you can filter out the illumination component. It takes the logarithm of the above equation, so that the illumination and reflectance components become additive: ln(f(x,y)) = ln(i(x,y)) + ln(r(x,y)). You then apply a high-pass filter to retain the reflectance component (so the slowly varying illumination component is filtered out). Take a look here and here for a detailed explanation of the process with examples.
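A rough sketch of that pipeline in OpenCV, where a Gaussian blur of the log image serves as the low-pass estimate of ln(i) and subtracting it is the high-pass step; the sigma value is an assumption (larger sigma treats broader gradients as illumination):

```cpp
#include <opencv2/opencv.hpp>

// Homomorphic-style reflectance estimate: log -> remove low-pass copy -> exp.
cv::Mat reflectanceEstimate(const cv::Mat& frameBGR, double sigma = 31.0)
{
    cv::Mat gray, logImg, lowPass, highPass, out;
    cv::cvtColor(frameBGR, gray, cv::COLOR_BGR2GRAY);
    gray.convertTo(gray, CV_32F, 1.0 / 255.0);

    cv::log(gray + 1e-3f, logImg);                             // ln f = ln i + ln r
    cv::GaussianBlur(logImg, lowPass, cv::Size(0, 0), sigma);  // ~ ln i (slow)
    highPass = logImg - lowPass;                               // ~ ln r (fast)
    cv::exp(highPass, out);                                    // reflectance, up to scale
    return out;
}
```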

After applying the filter, you'll have the estimated reflectance frames r^(x,y,t).

answered Nov 02 '22 by dhanushka