I am loading an image with the following code:
image = PIL.Image.open(file_path)
image = np.array(image)
It works, but the shape of the array turns out to be (X, X, 4), i.e. it has 4 layers. I would like normal RGB layers. Is that possible?
UPDATE
I found that just removing the 4th channel is insufficient. The following code was required:
image = PIL.Image.open(file_path)
image.thumbnail(resample_size)                     # resize in place, keeping aspect ratio
image = image.convert("RGB")                       # drop the alpha channel
image = np.asarray(image, dtype=np.float32) / 255  # scale to [0, 1]
image = image[:, :, :3]
Why?
Using the OpenCV library, imread() loads the image and returns it directly as a NumPy array. Then we need to convert the image colour from BGR to RGB. imwrite() is used to save the image to a file.
But note that PIL uses RGB channel order, while cv2 uses BGR.
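For reference, here is a minimal sketch of that flow (the file names are placeholders, not from the question):

import cv2

img = cv2.imread("input.png")                    # loads the file as a BGR NumPy array
img_rgb = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)   # reorder channels to RGB for use with PIL/matplotlib
cv2.imwrite("output.png", img)                   # imwrite expects BGR ordering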
The fourth layer is the transparency (alpha) value, used by image formats that support transparency, like PNG. If you drop that 4th channel you get a correct RGB image without transparency.
EDIT:
Example:
>>> import PIL.Image
>>> image = PIL.Image.open('../test.png')
>>> import numpy as np
>>> image = np.array(image)
>>> image.shape
(381, 538, 4)
>>> image[...,:3].shape
(381, 538, 3)
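Alternatively (a small sketch continuing the same session with the same hypothetical test file), you can let PIL drop the alpha channel before building the array, which is what the updated question code does with convert("RGB"):

>>> image = PIL.Image.open('../test.png').convert('RGB')  # RGBA -> RGB inside PIL
>>> np.array(image).shape
(381, 538, 3)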