Problems with using a rough greyscale algorithm?

So I'm designing a few programs for editing photos in Python using PIL, and one of them converts an image to greyscale (I'm avoiding the use of any functions from PIL).

The algorithm I've employed is simple: for each pixel (colour-depth is 24), I calculate the average of the R, G and B values and set all three channels to this average.

My program was producing greyscale images which seemed accurate, but I wondered whether I'd employed the correct algorithm, and I came across this answer to a question, where it seems that the 'correct' algorithm is to calculate 0.299 R + 0.587 G + 0.114 B.

I decided to compare my program to this algorithm. I generated a greyscale image using my program and another one (using the same input) from a website online (the top Google result for 'image to grayscale').

To my naked eye, they seemed exactly the same, and if there was any variation, I couldn't see it. However, I decided to use this website (top Google result for 'compare two images online') to compare my greyscale images. It turned out that, at the pixel level, they had slight variations, but none perceivable to the human eye at first glance (the differences can be spotted, but usually only when the images are laid over each other or switched between within milliseconds).
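
For anyone who wants to reproduce that comparison locally rather than relying on a website, here's a rough sketch using Pillow's ImageChops (the filenames are placeholders, and the threshold approximates the website's 10% fuzz setting):

    from PIL import Image, ImageChops

    a = Image.open("mine.png").convert("L")
    b = Image.open("theirs.png").convert("L")

    # Per-pixel absolute difference between the two greyscale images
    diff = ImageChops.difference(a, b)
    print("max pixel difference:", diff.getextrema()[1])

    # Count pixels that differ by more than a 10% fuzz (~26 out of 255)
    fuzz = round(0.10 * 255)
    print("pixels above fuzz:", sum(1 for v in diff.getdata() if v > fuzz))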

My Questions (the first is the main question):

  1. Are there any disadvantages to using my 'rough' greyscale algorithm?
  2. Does anyone have any input images where my greyscale algorithm would produce a visibly different image to the one that would be 'correct' ?
  3. Are there any colours/RGB combinations for which my algorithm won't work as well?

My key piece of code (if needed):

def greyScale(pixelTuple):
    return tuple([round(sum(pixelTuple) / 3)] * 3)
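
To apply this over a whole image loaded with PIL, a minimal sketch (the filenames are placeholders):

    from PIL import Image

    img = Image.open("input.jpg").convert("RGB")

    # Run every (R, G, B) tuple through greyScale and write the result back
    img.putdata([greyScale(p) for p in img.getdata()])
    img.save("greyscale_average.png")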

The 'correct' algorithm (which seems to heavily weight green):

def greyScale(pixelTuple):
    return tuple([round(0.299 * pixelTuple[0] + 0.587 * pixelTuple[1] + 0.114 * pixelTuple[2])] * 3)

My input image:

The greyscale image my algorithm produces:

The greyscale image which is 'correct':

The greyscale images compared online (differences highlighted in red, using a fuzz of 10%):

Despite the pixel-level variations highlighted above, the two greyscale images appear nearly identical (at least, to me).

Also, regarding my first question, if anyone's interested, this site has done some analysis on different algorithms for conversions to greyscale and also has some custom algorithms.

EDIT:

In response to @Szulat's answer, my algorithm actually produces this image instead (ignore the bad cropping; the original image had three circles, but I only needed the first one):

This is what my algorithm actually produces

In case people are wondering why I'm converting to greyscale (as it seems that the right algorithm depends on the purpose): I'm just making some simple photo-editing tools in Python so that I can have a mini-Photoshop and don't need to rely on the Internet to apply filters and effects.

Reason for Bounty: The different answers here cover different things, all of which are relevant and helpful, which makes it quite difficult to choose one to accept. I've started a bounty because I like a few of the answers listed here, but also because it'd be nice to have a single answer which covers everything I need for this question.

Asked Aug 13 '18 by Adi219


1 Answer

The images look pretty similar, but your eye can tell the difference, especially if you put one in place of the other.


For example, you may notice that the flowers in the background look brighter in the averaging conversion.

It is not that there is anything intrinsically "bad" about averaging the three channels. The reason for that formula is that we do not perceive red, green and blue equally, so their contributions to the intensities in a grayscale image shouldn't be equal. Since we perceive green most intensely, green pixels should look brighter in grayscale. However, as commented by Mark, there is no unique perfect conversion to grayscale: we see in color, and in any case everyone's vision is slightly different, so any formula is just an approximation that tries to make pixel intensities feel "right" for most people.
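
A quick numeric illustration of that point (a small sketch, not from the original post): feeding the pure primaries through both formulas shows that averaging maps red, green and blue all to the same grey, while the weighted formula ranks green brightest and blue darkest.

    def average(p):
        return round(sum(p) / 3)

    def luma(p):
        return round(0.299 * p[0] + 0.587 * p[1] + 0.114 * p[2])

    # Pure primaries: averaging gives 85 for all three, while the
    # weighted formula makes green much brighter than red or blue.
    for name, p in [("red", (255, 0, 0)), ("green", (0, 255, 0)), ("blue", (0, 0, 255))]:
        print(name, average(p), luma(p))
    # red 85 76
    # green 85 150
    # blue 85 29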

Answered Sep 24 '22 by jdehesa