I am trying to save a 128x128-pixel numpy array as a grayscale image. I assumed that pyplot.imsave would do the job, but it does not: it converts my array into an RGB image. I tried forcing the colormap to gray during the conversion, but even though the saved image appears in grayscale, it still has dimensions 128x128x4. Here is a code sample I wrote to show the behaviour:
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.image as mplimg
from matplotlib import cm

# Build a 128x128 super-Gaussian intensity profile on a square grid
x_tot = 10e-3
nx = 128
x = np.arange(-x_tot/2, x_tot/2, x_tot/nx)
[X, Y] = np.meshgrid(x, x)
R = np.sqrt(X**2 + Y**2)
diam = 5e-3
I = np.exp(-2*(2*R/diam)**4)

plt.figure()
plt.imshow(I, extent=[-x_tot/2, x_tot/2, -x_tot/2, x_tot/2])
print(I.shape)                 # (128, 128)

# Save with pyplot.imsave, then read the file back
plt.imsave('image.png', I)
I2 = plt.imread('image.png')
print(I2.shape)                # (128, 128, 4)

# Same attempt with an explicit gray colormap
mplimg.imsave('image2.png', np.uint8(I), cmap=cm.gray)
testImg = plt.imread('image2.png')
print(testImg.shape)           # (128, 128, 4)
In both cases the result of those print calls is (128, 128, 4).
Can anyone explain why imsave is creating those extra dimensions even though my input array is a plain 2-D luminance array? And of course, does anyone have a solution for saving the array in a standard grayscale format?
Thanks!
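The extra channels come from the colormap step: when imsave gets a 2-D array, it normalises the values and passes them through a colormap, and a colormap always returns RGBA, so the file on disk is a colour image even when every pixel happens to be a shade of gray. Here is a minimal sketch of roughly what happens under the hood, reusing the I array from your code:

from matplotlib import cm
rgba = cm.gray(I)     # the colormap turns each scalar into an (R, G, B, A) tuple
print(rgba.shape)     # (128, 128, 4), the same shape imread reports for the saved PNG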
With PIL it should work like this:
from PIL import Image   # Pillow; the old standalone PIL used plain "import Image"

# Rescale I to the full 0-255 range and convert to 8-bit integers
I8 = (((I - I.min()) / (I.max() - I.min())) * 255.9).astype(np.uint8)

img = Image.fromarray(I8)   # a 2-D uint8 array gives a single-channel ('L') image
img.save("file.png")
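As a quick check (a sketch assuming numpy, matplotlib and the snippet above have already been run in the same session as your question code), reading the file back should now give a plain 2-D array:

check = plt.imread("file.png")
print(check.shape)    # expected (128, 128): one luminance channel, no RGBA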