
How to convert an image from np.uint16 to np.uint8?

I am creating an image like so:

image = np.empty(shape=(height, width, 1), dtype = np.uint16)

After that I convert the image to BGR model:

image = cv2.cvtColor(image, cv2.COLOR_GRAY2BGR)

I'd like to convert the image to dtype = np.uint8 in order to use it with the cv2.threshold() function. That is, I would like to convert the image to CV_8UC1.

asked Jul 05 '12 by omar

3 Answers

You can use cv2.convertScaleAbs for this problem. See the documentation.

Check out the terminal demo below:

>>> img = np.empty((100,100,1),dtype = np.uint16)
>>> image = cv2.cvtColor(img,cv2.COLOR_GRAY2BGR)

>>> cvuint8 = cv2.convertScaleAbs(image)

>>> cvuint8.dtype
dtype('uint8')

Hope it helps!!!

answered Oct 15 '22 by Abid Rahman K


I suggest you use this:

outputImg8U = cv2.convertScaleAbs(inputImg16U, alpha=(255.0/65535.0))

This will output a uint8 image and map each value in the 0–65535 range proportionally into the 0–255 range.

Example:
pixel with value == 65535 will output value 255
pixel with value == 1300 will output value 5, etc.
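These numbers can be sanity-checked without OpenCV by emulating the scaling that convertScaleAbs applies to non-negative input: multiply by alpha, round, and saturate to 255. This is a plain-Python sketch, not OpenCV itself:

```python
def scale16to8(v, alpha=255.0 / 65535.0):
    # Emulates cv2.convertScaleAbs on a non-negative value:
    # scale by alpha, round to nearest, saturate to the uint8 maximum.
    return min(255, round(v * alpha))

print(scale16to8(65535))  # 255
print(scale16to8(1300))   # 5
```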
answered Oct 15 '22 by afiah


I assume you want to rescale the uint16 range 0..65535 into the uint8 range 0..255. In other words, the image will look the same visually, just with lower color depth.

np_uint16 = np.arange(2 ** 16, dtype=np.uint16).reshape(256, 256)
np_uint8 = (np_uint16 // 256).astype(np.uint8)

produces mapping:

[0..255] => 0 (256 values)
[256..511] => 1 (256 values)
... 
[65280..65535] => 255 (256 values)
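The bin boundaries above can be verified with plain integer division, which is all the expression does per pixel:

```python
# Each uint16 value maps to its 256-wide bin: v // 256.
for v in (0, 255, 256, 511, 65280, 65535):
    print(f"{v} -> {v // 256}")
```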

Why the two other answers are incorrect:

The accepted one:

img = np.empty((100,100,1), dtype = np.uint16)
image = cv2.cvtColor(img,cv2.COLOR_GRAY2BGR)
cvuint8 = cv2.convertScaleAbs(image)

^ will create an uninitialized uint16 array and then cast it to uint8 taking abs() and clipping values >255 to 255. Resulting mapping:

[0..254] => [0..254]
[255..65535] => 255
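With the default alpha=1 and beta=0, the per-pixel behaviour reduces to abs() followed by saturation, which can be emulated in plain Python (a sketch, not OpenCV itself):

```python
def convert_abs_default(v):
    # Emulates cv2.convertScaleAbs with alpha=1, beta=0:
    # take the absolute value, then saturate to the uint8 range.
    return min(255, abs(int(v)))

print(convert_abs_default(254))    # 254
print(convert_abs_default(40000))  # 255
```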

Next one:

outputImg8U = cv2.convertScaleAbs(inputImg16U, alpha=(255.0/65535.0))

^ produces slightly wrong mapping:

[0..128] => 0 (129 values)
[129..385] => 1 (257 values)
...
[65407..65535] => 255 (129 values)

so bins 1..254 each get one extra value at the expense of bins 0 and 255. It's a bit tricky to get a precise mapping with convertScaleAbs, since it uses round half down:

np_uint16 = np.arange(2 ** 16, dtype=np.uint16).reshape(256, 256)
np_uint8 = cv2.convertScaleAbs(np_uint16, alpha = 1./256., beta=-.49999)
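The beta trick can be checked in plain Python by emulating the scale-shift-round pipeline (an emulation under the assumption that OpenCV rounds to nearest; no exact .5 ties occur for uint16 input with this beta):

```python
def beta_trick(v):
    # Emulates cv2.convertScaleAbs(v, alpha=1/256, beta=-0.49999)
    # for a uint16 value: scale, shift, round, saturate to [0, 255].
    scaled = v / 256.0 - 0.49999
    return max(0, min(255, round(scaled)))

# Should agree with plain v // 256 across the whole uint16 range.
assert all(beta_trick(v) == v // 256 for v in range(65536))
```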

PS: if you care about performance, the OpenCV version is ~50% faster than the NumPy one on my Mac.

answered Oct 15 '22 by apatsekin