I'm doing some image processing and I need an automatic white balancing algorithm that isn't too CPU-intensive. Any recommendations?
EDIT: and if it's relevant to efficiency, I'll be implementing it in Java with color images as an array of integers.
Automatic white balance (AWB) algorithms try to correct for the ambient light with minimum input from the user, so that the resulting image looks like what our eyes would see. Automatic white balancing is done in two steps:

Step 1: Estimate the scene illuminant.
Step 2: Correct the color balance of the image.
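For a concrete starting point, here's a minimal Java sketch of that two-step pipeline using the gray-world assumption (the scene should average out to gray) as the illuminant estimate. The packed 0xAARRGGBB int[] layout matches the question's setup; the method name is just for illustration.

```java
// Minimal gray-world AWB sketch: step 1 estimates the illuminant as the
// per-channel mean, step 2 scales each channel toward the gray average.
// Assumes packed 0xAARRGGBB pixels and a nonzero mean in each channel.
public static void grayWorld(int[] pixels) {
    long sumR = 0, sumG = 0, sumB = 0;
    for (int p : pixels) {
        sumR += (p >> 16) & 0xFF;
        sumG += (p >> 8) & 0xFF;
        sumB += p & 0xFF;
    }
    double n = pixels.length;
    double avgR = sumR / n, avgG = sumG / n, avgB = sumB / n;
    double gray = (avgR + avgG + avgB) / 3.0;          // step 1: illuminant estimate
    double kR = gray / avgR, kG = gray / avgG, kB = gray / avgB;
    for (int i = 0; i < pixels.length; i++) {          // step 2: correct each pixel
        int p = pixels[i];
        int r = Math.min(255, (int) (((p >> 16) & 0xFF) * kR));
        int g = Math.min(255, (int) (((p >> 8) & 0xFF) * kG));
        int b = Math.min(255, (int) ((p & 0xFF) * kB));
        pixels[i] = (p & 0xFF000000) | (r << 16) | (g << 8) | b;
    }
}
```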
White balance (WB) is the process of removing unrealistic color casts, so that objects which appear white in person are rendered white in your photo.
To calibrate it manually, point the camera at a white (or gray) card so that it fills the frame completely, then press the White Balance button (or select it in the menu); the camera performs its WB calculation from that reference.
“RAW allows you to adjust the white balance in post-production effectively,” says Waltz. Aim for consistent lighting. Shooting photos with mismatched sources of light will make it more difficult to edit the white balance in post-production. “Try to get your light sources to match,” says Waltz.
GIMP apparently uses a very simple algorithm for automatic white balancing. http://docs.gimp.org/en/gimp-layer-white-balance.html
The White Balance command automatically adjusts the colors of the active layer by stretching the Red, Green and Blue channels separately. To do this, it discards pixel colors at each end of the Red, Green and Blue histograms which are used by only 0.05% of the pixels in the image and stretches the remaining range as much as possible. The result is that pixel colors which occur very infrequently at the outer edges of the histograms (perhaps bits of dust, etc.) do not negatively influence the minimum and maximum values used for stretching the histograms, in comparison with Stretch Contrast. Like “Stretch Contrast”, however, there may be hue shifts in the resulting image.
There is a bit more tweaking needed than is described here: my first attempt at implementing this seems to work for most photos, but other photos come out with artifacts or contain too much red, green, or blue. :/
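For reference, here's a rough Java sketch of the stretch the GIMP docs describe, under the same packed-int assumption as the question. Treat it as a starting point rather than GIMP's exact code.

```java
// Per-channel stretch in the style GIMP describes: build a histogram for
// each of R, G, B, discard about 0.05% of the pixels at each tail, then
// linearly stretch what remains to the full 0..255 range.
// Pixels are packed 0xAARRGGBB; shifts 16, 8, 0 select the R, G, B bytes.
public static void gimpStyleWhiteBalance(int[] pixels) {
    int clip = (int) (pixels.length * 0.0005);  // 0.05% of the pixels per tail
    for (int shift : new int[] {16, 8, 0}) {
        int[] hist = new int[256];
        for (int p : pixels) hist[(p >> shift) & 0xFF]++;

        // Walk in from each end until the clip budget would be exceeded.
        int lo = 0, hi = 255, seen = 0;
        while (lo < 255 && seen + hist[lo] <= clip) seen += hist[lo++];
        seen = 0;
        while (hi > lo && seen + hist[hi] <= clip) seen += hist[hi--];

        double scale = 255.0 / Math.max(1, hi - lo);
        for (int i = 0; i < pixels.length; i++) {
            int v = (pixels[i] >> shift) & 0xFF;
            int s = (int) Math.round(Math.min(255.0, Math.max(0.0, (v - lo) * scale)));
            pixels[i] = (pixels[i] & ~(0xFF << shift)) | (s << shift);
        }
    }
}
```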
A relatively simple algorithm is to average the hues (in HSV or HSL) of the brightest and darkest pixels on the screen. In a pinch, go with the brightest pixel only. If the hues of the brightest and darkest pixels are too different, go with the bright pixel. If the dark is near black, go with the bright pixel.
Why even look at the dark pixel? Sometimes the dark is not near black, and hints at the ambient light or fog or haze.
This will make sense to you if you're a heavy Photoshop user. Highlights in a photo are unrelated (or weakly related) to the underlying color of the object. They are your best representation of the color cast of the light, unless the image is so overexposed that everything has overwhelmed the CCDs.
Then adjust the hues of all pixels.
You'll need fast RGB to HSV and HSV to RGB functions. (But maybe you can work in RGB for the pixel corrections with a LUT or linear interpolation.)
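In Java you can at least prototype with the JDK's built-in converters, java.awt.Color.RGBtoHSB and Color.HSBtoRGB, before writing faster ones. Here's a sketch of the hue-adjustment pass, where hueShift is a hypothetical correction you'd derive from the bright/dark analysis above:

```java
import java.awt.Color;

// Sketch of the "adjust the hues of all pixels" step using the JDK's own
// converters. hueShift is a hypothetical correction in [0, 1) hue-wheel
// units, not something prescribed by the answer above.
public static void shiftHues(int[] pixels, float hueShift) {
    float[] hsb = new float[3];
    for (int i = 0; i < pixels.length; i++) {
        int p = pixels[i];
        Color.RGBtoHSB((p >> 16) & 0xFF, (p >> 8) & 0xFF, p & 0xFF, hsb);
        float h = (hsb[0] + hueShift) % 1f;
        if (h < 0) h += 1f;                  // keep hue in [0, 1)
        pixels[i] = (p & 0xFF000000) | (Color.HSBtoRGB(h, hsb[1], hsb[2]) & 0x00FFFFFF);
    }
}
```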
You don't want to go by average pixel color or most popular color. That way lies madness.
To quickly find the brightest color (and the darkest one), you can work in RGB, but you should have multipliers for green, red, and blue. On an RGB monitor, 255 green is brighter than 255 red, which is brighter than 255 blue. I used to have good multipliers in my head, but alas, they have fled my memory. You can probably google for them.
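The multipliers being alluded to are most likely the standard Rec. 601 luma weights, 0.299 R + 0.587 G + 0.114 B, which match the ordering above (green brighter than red, red brighter than blue). A quick sketch using them to find the brightest pixel:

```java
// Perceived brightness via the Rec. 601 luma weights (an assumption: these
// are probably the multipliers meant above). Returns the index of the
// brightest pixel in a packed 0xAARRGGBB array.
public static int brightestPixel(int[] pixels) {
    int best = 0;
    double bestLuma = -1;
    for (int i = 0; i < pixels.length; i++) {
        int p = pixels[i];
        double luma = 0.299 * ((p >> 16) & 0xFF)
                    + 0.587 * ((p >> 8) & 0xFF)
                    + 0.114 * (p & 0xFF);
        if (luma > bestLuma) { bestLuma = luma; best = i; }
    }
    return best;
}
```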
This will fail in an image which has no highlights. A matte painted wall, for example. But I don't know what you can do about that.
There are many improvements to make to this simple algorithm. You can average multiple bright pixels, grid the image and grab bright and dark pixels from each cell, etc. You'll find some obvious tweaks after implementing the algorithm.
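As one example of the gridding tweak, here's a sketch that takes the brightest pixel from each grid cell and averages them (in RGB for simplicity, though the answer suggests averaging hues). width, height, and cells are illustrative parameters; pixels are packed 0xAARRGGBB in row-major order.

```java
// Grid the image, grab the brightest pixel from each cell (by Rec. 601
// luma), and average those highlights into a single {R, G, B} estimate.
// cells is the number of cells per side; assumes a non-empty image.
public static int[] gridHighlights(int[] pixels, int width, int height, int cells) {
    long sumR = 0, sumG = 0, sumB = 0;
    int count = 0;
    int cw = Math.max(1, width / cells), ch = Math.max(1, height / cells);
    for (int cy = 0; cy < height; cy += ch) {
        for (int cx = 0; cx < width; cx += cw) {
            int best = 0;
            double bestLuma = -1;
            for (int y = cy; y < Math.min(cy + ch, height); y++) {
                for (int x = cx; x < Math.min(cx + cw, width); x++) {
                    int p = pixels[y * width + x];
                    double luma = 0.299 * ((p >> 16) & 0xFF)
                                + 0.587 * ((p >> 8) & 0xFF)
                                + 0.114 * (p & 0xFF);
                    if (luma > bestLuma) { bestLuma = luma; best = p; }
                }
            }
            sumR += (best >> 16) & 0xFF;
            sumG += (best >> 8) & 0xFF;
            sumB += best & 0xFF;
            count++;
        }
    }
    return new int[] { (int) (sumR / count), (int) (sumG / count), (int) (sumB / count) };
}
```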