 

Want to understand how a dithering algorithm can decrease color depth?

Sometimes I have a true-color image, and by using a dithering algorithm I can reduce its colors to just 256. I want to know how the dithering algorithm achieves this.

I understand that dithering can reduce the error, but how can the algorithm decrease the color depth, especially from true color down to just 256 colors or even fewer?

asked Dec 27 '22 by Jack Lee


2 Answers

Dithering simulates a higher color depth by "mixing" the colors in a defined palette to create the illusion of a color that isn't really there. In reality, it's doing the same thing that your computer monitor is already doing: taking a color, decomposing it into primary colors, and displaying those right next to each other. Your computer monitor does it with variable-intensity red, green, and blue, while dithering does it with a set of fixed-intensity colors. Since your eye has limited resolution, it sums the inputs, and you perceive the average color.

In the same way, a newspaper can print images in grayscale by dithering the black ink. They don't need lots of intermediate gray colors to get a decent grayscale image; they simply use smaller or larger dots of black ink on the page.
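To make the averaging concrete, here is a minimal Python sketch (assuming NumPy; the `checkerboard` helper and the patch size are just illustrative) that fills a patch with alternating black and white pixels and shows that the patch averages to a mid gray:

```python
import numpy as np

# Minimal sketch: a black/white checkerboard patch averages to mid gray,
# which is roughly what the eye perceives when the pixels are small enough.
def checkerboard(color_a, color_b, size=8):
    patch = np.empty((size, size, 3), dtype=float)
    for y in range(size):
        for x in range(size):
            patch[y, x] = color_a if (x + y) % 2 == 0 else color_b
    return patch

black = np.array([0.0, 0.0, 0.0])
white = np.array([255.0, 255.0, 255.0])
patch = checkerboard(black, white)
print(patch.mean(axis=(0, 1)))  # -> [127.5 127.5 127.5], a mid gray
```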

When you dither an image, you lose information, but your eye perceives it in largely the same way. In this sense, it's a little like JPEG or other lossy compression algorithms which discard information that your eye can't see.

answered Jan 01 '23 by Steven Bell


Dithering by itself does not decrease the number of colors. Rather, dithering is applied during the process of reducing the colors to make the artifacts of the color reduction less visible.

A color that is halfway between two other colors can be simulated by a pattern that is half of one color and half of the other. This can be generalized to other percentages as well. A color that is a mixture of 10% of one color and 90% of the other can be simulated by having 10% of the pixels be the first color and 90% of the pixels be the second. This is because the eye will tend to consider the random variations as noise and average them into the overall impression of the color of an area.
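A quick sketch of that 10%/90% claim (again assuming NumPy; the patch size and random seed are arbitrary):

```python
import numpy as np

# Each pixel is black with probability 0.1 and white with probability 0.9;
# the area average lands near 0.1 * 0 + 0.9 * 255 = 229.5.
rng = np.random.default_rng(seed=0)
mask = rng.random((256, 256)) < 0.1   # True -> black pixel
patch = np.where(mask, 0.0, 255.0)
print(patch.mean())                   # close to 229.5
```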

The most effective dithering algorithms will track the difference between the original image and the color-reduced one, and account for that difference while converting future pixels. This is called error diffusion: the error on the current pixel is diffused into the conversions of subsequent pixels.
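As a minimal sketch of the idea (a from-scratch grayscale implementation in the classic Floyd-Steinberg style, not any particular library's code):

```python
import numpy as np

def floyd_steinberg(gray, levels=2):
    """Quantize a float grayscale image in [0, 255] to `levels` values,
    diffusing each pixel's quantization error to its unvisited neighbors
    with the classic Floyd-Steinberg weights 7/16, 3/16, 5/16, 1/16."""
    img = gray.astype(float).copy()
    h, w = img.shape
    step = 255.0 / (levels - 1)
    for y in range(h):
        for x in range(w):
            old = img[y, x]
            # Snap to the nearest palette value, clamped to the valid range.
            new = min(max(int(round(old / step)), 0), levels - 1) * step
            img[y, x] = new
            err = old - new
            if x + 1 < w:
                img[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    img[y + 1, x - 1] += err * 3 / 16
                img[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    img[y + 1, x + 1] += err * 1 / 16
    return img

# Dither a smooth gradient down to pure black and white: only two output
# values remain, yet the density of white pixels still tracks the gradient.
gradient = np.tile(np.linspace(0.0, 255.0, 64), (16, 1))
out = floyd_steinberg(gradient, levels=2)
print(np.unique(out))  # [  0. 255.]
```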

The process of selecting the best 256 colors for the conversion is separate from dithering.
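In practice a library usually handles both steps together. As a usage sketch with Pillow (the enum names assume Pillow 9.1 or later, and the filenames are placeholders): median cut chooses the 256-color palette, and Floyd-Steinberg dithering is applied while mapping pixels onto it.

```python
from PIL import Image

# Usage sketch with Pillow: pick a 256-color palette via median cut,
# then dither while remapping. Filenames here are placeholders.
img = Image.open("photo.png").convert("RGB")
reduced = img.quantize(colors=256,
                       method=Image.Quantize.MEDIANCUT,
                       dither=Image.Dither.FLOYDSTEINBERG)
reduced.save("photo_256.png")
```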

answered Jan 01 '23 by Mark Ransom