I have a set of image files, and I want to reduce the number of colors of them to 64. How can I do this with OpenCV?
I need this so I can work with a 64-sized image histogram. I'm implementing CBIR techniques
What I want is color quantization to a 4-bit palette.
Alternatively, you can do the reduction in Photoshop: 1. Open main menu > Image > Mode > Indexed Color to bring up the color-reduction dialog. 2. Select no dither, no transparency, and set the required number of colors. 3. Select whichever palette type works best for your image.
Changing color spaces: for color conversion, OpenCV provides the function cv.cvtColor(input_image, flag), where flag determines the type of conversion. For HSV, the hue range is [0,179], the saturation range is [0,255], and the value range is [0,255].
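For example, a minimal sketch of that call in Python ('image.jpg' is a placeholder filename, not from the original post):

import cv2

img = cv2.imread('image.jpg')                 # loaded as BGR by default
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)    # H in [0,179], S and V in [0,255]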
OpenCV is also used in many real-time applications and has built-in functions for color detection and segmentation operations.
RGB image: pixel intensities in this color space are represented by values ranging from 0 to 255 per channel. Thus, the number of possible colors for one pixel is roughly 16.7 million (256 x 256 x 256).
You might consider K-means, yet in this case it will most likely be extremely slow. A better approach might be doing this "manually" on your own. Let's say you have an image of type CV_8UC3, i.e. an image where each pixel is represented by 3 RGB values from 0 to 255 (Vec3b). You might "map" these 256 values per channel to only 4 specific values, which would yield 4 x 4 x 4 = 64 possible colors.
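As a rough sketch of that idea in Python (not the answerer's exact code; 'input.jpg' and 'quantized64.jpg' are placeholder filenames), each channel can be snapped to one of 4 levels:

import cv2

img = cv2.imread('input.jpg')                  # 8-bit, 3-channel (CV_8UC3)
step = 256 // 4                                # 4 levels per channel -> 4 x 4 x 4 = 64 colors
quantized = img // step * step + step // 2     # each channel becomes 32, 96, 160 or 224
cv2.imwrite('quantized64.jpg', quantized)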
I had a dataset where I needed to make sure that dark = black, light = white, and reduce the number of colors of everything in between. This is what I did (C++):
inline uchar reduceVal(const uchar val)
{
    if (val < 64) return 0;
    if (val < 128) return 64;
    return 255;
}

void processColors(Mat& img)
{
    uchar* pixelPtr = img.data;
    for (int i = 0; i < img.rows; i++)
    {
        for (int j = 0; j < img.cols; j++)
        {
            const int pi = i*img.cols*3 + j*3;
            pixelPtr[pi + 0] = reduceVal(pixelPtr[pi + 0]); // B
            pixelPtr[pi + 1] = reduceVal(pixelPtr[pi + 1]); // G
            pixelPtr[pi + 2] = reduceVal(pixelPtr[pi + 2]); // R
        }
    }
}
causing [0,64) to become 0, [64,128) to become 64, and [128,255] to become 255, yielding 27 colors:
To me this seems to be neat, perfectly clear and faster than anything else mentioned in other answers.
You might also consider reducing these values to the nearest multiple of some number, say 64:
inline uchar reduceVal(const uchar val)
{
    if (val < 192) return uchar(val / 64.0 + 0.5) * 64;
    return 255;
}
which would yield a set of 5 possible values per channel: {0, 64, 128, 192, 255}, i.e. 125 colors.
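A rough NumPy translation of this variant (a sketch, assuming img is a uint8 NumPy array as returned by cv2.imread, and not checked byte for byte against the C++ version) could look like:

import numpy as np

def reduce_to_multiples(img):
    # round each value to the nearest multiple of 64 (ties rounded up, as in the C++ version)
    out = np.floor(img / 64.0 + 0.5) * 64
    out[img >= 192] = 255                      # everything from 192 upward becomes 255
    return out.astype(np.uint8)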
This subject was well covered in the OpenCV 2 Computer Vision Application Programming Cookbook:
Chapter 2 shows a few reduction operations, one of them demonstrated here in C++ and later in Python:
#include <iostream>
#include <vector>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>

void colorReduce(cv::Mat& image, int div=64)
{
    int nl = image.rows;                    // number of lines
    int nc = image.cols * image.channels(); // number of elements per line

    for (int j = 0; j < nl; j++)
    {
        // get the address of row j
        uchar* data = image.ptr<uchar>(j);

        for (int i = 0; i < nc; i++)
        {
            // process each pixel
            data[i] = data[i] / div * div + div / 2;
        }
    }
}

int main(int argc, char* argv[])
{
    // Load input image (colored, 3-channel, BGR)
    cv::Mat input = cv::imread(argv[1]);
    if (input.empty())
    {
        std::cout << "!!! Failed imread()" << std::endl;
        return -1;
    }

    colorReduce(input);

    cv::imshow("Color Reduction", input);
    cv::imwrite("output.jpg", input);
    cv::waitKey(0);

    return 0;
}
Below you can find the input image (left) and the output of this operation (right):
The equivalent code in Python would be the following: (credits to @eliezer-bernart)
import cv2
import numpy as np

input = cv2.imread('castle.jpg')

# colorReduce()
div = 64
quantized = input // div * div + div // 2

cv2.imwrite('output.jpg', quantized)
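For the 64-bin histogram mentioned in the question, one possible follow-up (a sketch reusing the same 'castle.jpg' input; the packing order is an arbitrary choice) is to reduce each channel to 4 levels, pack the three levels into a single index per pixel, and count occurrences:

import cv2
import numpy as np

img = cv2.imread('castle.jpg')
div = 64
levels = img // div                        # each channel reduced to {0, 1, 2, 3}
# pack the (B, G, R) levels into one index in 0..63
index = levels[..., 0] * 16 + levels[..., 1] * 4 + levels[..., 2]
hist = np.bincount(index.ravel(), minlength=64).astype(np.float32)
hist /= hist.sum()                         # normalized 64-bin color histogram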