I'm struggling to understand how OpenCV interprets NumPy arrays.
```python
import cv2
import numpy as np

if __name__ == '__main__':
    size = (w, h, channels) = (100, 100, 1)
    img = np.zeros(size, np.int8)
    cv2.imshow('result', img)
    cv2.waitKey(0)
    cv2.destroyAllWindows()
```
A black grayscale 100x100 image, right? No, it's showing me gray! Why is that?
OK, the crucial part is the dtype. I had chosen `np.int8`. When I use `np.uint8` instead, the image is black. Surprisingly, when `dtype=np.int8`, zeros are displayed as 127 (or 128)! I expected that zero is still zero, no matter whether it is signed or unsigned. The zeros really are zeros; it is `cv2.imshow` that apparently maps the signed 8-bit range [-128, 127] onto the display range [0, 255], so a signed 0 lands at mid-gray.
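This can be checked with NumPy alone. The sketch below reproduces the apparent display mapping; the +128 offset is inferred from the observed behaviour, not quoted from the OpenCV docs:

```python
import numpy as np

# Zeros are zeros, regardless of signedness:
signed = np.zeros((100, 100, 1), np.int8)
unsigned = np.zeros((100, 100, 1), np.uint8)
assert int(signed[0, 0, 0]) == int(unsigned[0, 0, 0]) == 0

# imshow appears to map the signed range [-128, 127] onto the
# display range [0, 255] by adding 128, so a signed zero is
# rendered as mid-gray. Reproducing that mapping in NumPy:
displayed = (signed.astype(np.int16) + 128).astype(np.uint8)
print(int(displayed[0, 0, 0]))  # 128 -> mid-gray
```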
For a BGR image, use:

```python
img = np.zeros([height, width, 3], dtype=np.uint8)
```
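As a quick usage example, remember that OpenCV stores channels in BGR order, so channel index 2 is red:

```python
import numpy as np

height, width = 100, 100
img = np.zeros([height, width, 3], dtype=np.uint8)

# Channel 2 is red in BGR order -> a solid red image:
img[:, :, 2] = 255
```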