
Why is this "reduction factor" algorithm doing "+ div/2"?

So I am running through "OpenCV 2 Computer Vision Application Programming Cookbook" by Robert Laganiere. Around page 42 it talks about an image reduction algorithm. I understand the algorithm (I think), but I do not understand exactly why one part was put in. I think I know why, but if I am wrong I would like to be corrected. I am going to copy and paste a little bit of it in here:

"Color images are composed of 3-channel pixels. Each of these channels corresponds to the intensity value of one of the three primary colors (red, green, blue). Since each of these values is an 8-bit unsigned char, the total number of colors is 256x256x256, which is more than 16 million colors. Consequently, to reduce the complexity of an analysis, it is sometimes useful to reduce the number of colors in an image. One simple way to achieve this goal is to simply subdivide the RGB space into cubes of equal sizes. For example, if you reduce the number of colors in each dimension by 8, then you would obtain a total of 32x32x32 colors. Each color in the original image is then assigned a new color value in the color-reduced image that corresponds to the value in the center of the cube to which it belongs. Therefore, the basic color reduction algorithm is simple. If N is the reduction factor, then for each pixel in the image and for each channel of this pixel, divide the value by N (integer division, therefore the remainder is lost). Then multiply the result by N; this will give you the multiple of N just below the input pixel value. Just add N/2 and you obtain the central position of the interval between two adjacent multiples of N. If you repeat this process for each 8-bit channel value, then you will obtain a total of 256/N x 256/N x 256/N possible color values.

How to do it... The signature of our color reduction function will be as follows: void colorReduce(cv::Mat &image, int div=64); The user provides an image and the per-channel reduction factor. Here, the processing is done in-place, that is, the pixel values of the input image are modified by the function. See the There's more... section of this recipe for a more general function signature with input and output arguments. The processing is simply done by creating a double loop that goes over all pixel values:"

void colorReduce(cv::Mat &image, int div=64) {
    int nl = image.rows;                    // number of lines
    int nc = image.cols * image.channels(); // total number of elements per line

    for (int j = 0; j < nl; j++) {
        // get the address of row j
        uchar* data = image.ptr<uchar>(j);

        for (int i = 0; i < nc; i++) {
            // process each pixel ---------------------
            data[i] = data[i] / div * div + div / 2; // <- HERE IS WHERE I NEED UNDERSTANDING!!!
            // end of pixel processing ----------------
        }
    }
}

So I get how dividing by div reduces the 0-255 pixel value, losing whatever remainder was left. Then, by multiplying by div again, we scale it back up to keep it in the 0-255 range. Why are we then adding div/2 back into the answer? The only reason I can think of is that this causes some values to be rounded down and some rounded up; if you don't do it, all your values are rounded down. So in a way it gives a "better" average?

Don't know, so what do you guys/girls think?

asked May 28 '14 by NDEthos

2 Answers

The easiest way to illustrate this is using an example.

For simplicity, let's say we are processing a single channel of an image. There are 256 distinct colors, ranging from 0 to 255. We are also going to use N=64 in our example.

Using these numbers, we will reduce the number of colors from 256 to 256/64 = 4. Let's draw a graph of our color space:

|......|......|......|......| 
0      63    127    191    255

The dotted line represents our colorspace, going from 0 to 255. We have split this interval into 4 parts, and the splits are represented by the vertical lines.

In order to reduce all 256 colors to 4 colors, we are going to divide each color by 64 (losing the remainder), and then multiply it by 64 again. Let's see how this goes:

[0  , 63 ] / 64 * 64 = 0
[64 , 127] / 64 * 64 = 64
[128, 191] / 64 * 64 = 128
[192, 255] / 64 * 64 = 192

As you can see, all the colors from the first part became 0, all the colors from the second part became 64, third part 128, fourth part 192. So our color space looks like this:

|......|......|......|......|
0      63    127    191    255
\______/\_____/\_____/\_____/
|       |      |      |
0       64     128    192

But this is not very useful. You can see that all our colors sit at the left edge of their intervals. It would be more helpful if they were in the middle of the intervals, and that's why we add 64/2 = 32 to the values. Adding half of the interval length shifts the colors to the center of the intervals. That's also what it says in the book: "Just add N/2 and you obtain the central position of the interval between two adjacent multiples of N."

So let's add 32 to our values and see how everything looks:

[0  , 63 ] / 64 * 64 + 32 = 32
[64 , 127] / 64 * 64 + 32 = 96
[128, 191] / 64 * 64 + 32 = 160
[192, 255] / 64 * 64 + 32 = 224

And the interval looks like this:

|......|......|......|......| 
0      63    127    191    255
\______/\_____/\_____/\_____/ 
   |       |      |      |
   32      96    160    224

This is a much better color reduction. The algorithm reduced our colorspace from 256 colors to 4, and each of those colors sits in the middle of the interval it represents.

answered Oct 03 '22 by Ove


It is done to map each value to the midpoint of its quantization interval rather than to the floor of it. For example, for N = 32, all values from 0 to 31 give 16 instead of 0.

Please check the following picture. (figure: Quantization)

answered Oct 03 '22 by Virus_7