The following is my function to convert an RGB image to a grayscale image.
My input image is 32*32*3, whereas the output dimension is 32*32, but I am looking for 32*32*1. Do I need to resize or rescale this image?
Any thoughts?
import numpy as np

def rgb2gray(rgb):
    return np.dot(rgb[..., :3], [0.299, 0.587, 0.114])
If you want the conversion to happen inside the TensorFlow graph itself, you can use tf.image.rgb_to_grayscale (https://www.tensorflow.org/api_docs/python/tf/image/rgb_to_grayscale), which returns a tensor with a trailing channel dimension of 1:
tf.image.rgb_to_grayscale(input_images)
Also, it looks like you are answering your own question. What is wrong with
def rgb2gray(rgb):
    return np.dot(rgb[..., :3], [0.299, 0.587, 0.114])
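As for the shape: np.dot sums over the channel axis, so the result is (32, 32). You don't need to resize or rescale anything; if your network expects (32, 32, 1), just append a singleton channel axis after the conversion. A minimal sketch with a dummy image (the image array here is made up for illustration):

```python
import numpy as np

def rgb2gray(rgb):
    # Weighted sum over the last axis using BT.601 luma coefficients;
    # np.dot collapses the channel dimension, giving shape (H, W).
    return np.dot(rgb[..., :3], [0.299, 0.587, 0.114])

img = np.random.rand(32, 32, 3)      # dummy 32x32 RGB image
gray = rgb2gray(img)                 # shape (32, 32)
gray = gray[..., np.newaxis]         # add channel axis -> (32, 32, 1)
print(gray.shape)                    # (32, 32, 1)
```

np.expand_dims(gray, axis=-1) does the same thing as the np.newaxis indexing, if you prefer an explicit function call.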
Good luck!