What's the best way to use Numpy to convert a size (x, y, 3) array of rgb pixel values to a size (x, y, 1) array of grayscale pixel values?
I have a function, rgbToGrey(rgbArray) that can take the [r,g,b] array and return the greyscale value. I'd like to use it along with Numpy to shrink the 3rd dimension of my array from size 3 to size 1.
How can I do this?
Note: This would be pretty easy if I had the original image and could grayscale it first using Pillow, but I don't have it.
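For reference, a generic (if slow) way to apply a per-pixel function like the `rgbToGrey(rgbArray)` described above is `np.apply_along_axis`. This is only a sketch: the body of `rgbToGrey` here is a stand-in, since the original function isn't shown.

```python
import numpy as np

def rgbToGrey(rgb):
    # Stand-in for the per-pixel function described in the question:
    # takes an [r, g, b] triple and returns a single grey value.
    r, g, b = rgb
    return 0.2989 * r + 0.5870 * g + 0.1140 * b

rgbArray = np.random.rand(4, 5, 3)  # a small sample (x, y, 3) image

# Apply the function along the last (colour) axis; the result is (x, y).
grey = np.apply_along_axis(rgbToGrey, 2, rgbArray)

# Add a trailing axis to get the (x, y, 1) shape asked for.
grey = grey[..., np.newaxis]
print(grey.shape)  # (4, 5, 1)
```

This calls the Python function once per pixel, so it is much slower than a vectorised `np.dot`, but it works for any per-pixel conversion function.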
UPDATE:
The function I was looking for was np.dot().
From the answer to this question:
Assuming we convert rgb into greyscale through the formula:
.3r + .6g + .1b = grey,
we can do np.dot(rgb[...,:3], [.3, .6, .1])
to get what I'm looking for, a 2d array of grey-only values.
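A minimal sketch of that np.dot call, using the weights from the formula above (the array here is made-up sample data, not a real image):

```python
import numpy as np

rgb = np.random.rand(4, 5, 3)  # sample (x, y, 3) RGB data

# Weighted sum over the colour axis: each [r, g, b] triple
# collapses to a single grey value.
grey = np.dot(rgb[..., :3], [.3, .6, .1])
print(grey.shape)  # (4, 5) -- a 2D array of grey-only values
```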
Use numpy.dot() to convert an image from RGB to grayscale. Use image.imread(fname) to get a NumPy array representing an image named fname. Call numpy.dot(a, b) with a as array[..., :3] and b as [0.2989, 0.5870, 0.1140] to convert the array to grayscale.
You just have to take the average of the three colour channels. Since it's an RGB image, add r, g, and b together and divide by 3 to get your desired grayscale image. It's done in this way.
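A sketch of that averaging approach, assuming a float image array (sample data here):

```python
import numpy as np

rgb = np.random.rand(4, 5, 3)  # sample (x, y, 3) RGB data

# Mean of r, g, b over the colour axis; keepdims=True preserves
# the trailing size-1 axis, giving (x, y, 1) directly.
grey = rgb.mean(axis=2, keepdims=True)
print(grey.shape)  # (4, 5, 1)
```

The plain average weights all three channels equally; the luminosity weights used elsewhere in this thread better match perceived brightness.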
See the answers in another thread.
Essentially:
gray = 0.2989 * r + 0.5870 * g + 0.1140 * b
np.dot(rgb[...,:3], [0.299, 0.587, 0.114])
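Put together, and reshaped to the (x, y, 1) shape from the original question, this looks like the following (sample data in place of a real image):

```python
import numpy as np

rgb = np.random.rand(4, 5, 3)  # sample (x, y, 3) RGB data

# Luminosity-weighted sum: gray = 0.2989*r + 0.5870*g + 0.1140*b
gray = np.dot(rgb[..., :3], [0.2989, 0.5870, 0.1140])

# np.dot yields (x, y); add a trailing axis for (x, y, 1).
gray = gray[..., np.newaxis]
print(gray.shape)  # (4, 5, 1)
```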