 

How to display 16-bit 4096 intensity image in Python openCV?

I have images encoded in grayscale 16-bit tiff format. They use a variant of 16-bit color depth where the max intensity is 4,096.

I believe the default max intensity in openCV is 65,536, so my image shows up as black using the following code.

import cv2

# Flag -1 (cv2.IMREAD_UNCHANGED) preserves the 16-bit depth
image = cv2.imread("test.tif", -1)

cv2.imshow('tiff', image)
cv2.waitKey(0)
cv2.destroyAllWindows()
print(image)

[screenshot: the cv2.imshow window appears almost entirely black]
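For reference, a quick way to confirm the bit depth and the actual value range of the loaded array (a minimal sketch, assuming the file loads as a single-channel uint16 numpy array):

import cv2

image = cv2.imread("test.tif", -1)
# Expect dtype uint16; for this data the maximum should be around 4,095
print(image.dtype, image.min(), image.max())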

I can use vmin and vmax in matplotlib to configure the color mapping:

import cv2
import matplotlib.pyplot as plt

image = cv2.imread("test.tif", -1)
# Map the data range 0..4096 onto the full grayscale colormap
plt.imshow(image, cmap="gray", vmin=0, vmax=4096)
plt.show()

It shows the content of the image:

[screenshot: the matplotlib figure shows the image content correctly]

The reason I want to stick with openCV is that matplotlib doesn't support displaying 16-bit RGB images.

The documentation of cv2.imshow is not really helpful. Are there ways to display 16-bit 4096 intensity images in Python openCV?

The testing image test.tif can be found here.

asked May 29 '18 by Jay Wong


1 Answer

You'll want to use cv2.normalize() to scale the image before displaying.

You can set the output min/max and it will scale the image accordingly (mapping the image's minimum to alpha and its maximum to beta). Supposing your img is already a uint16:

# Map the image's minimum to 0 (alpha) and its maximum to 65535 (beta)
img_scaled = cv2.normalize(img, dst=None, alpha=0, beta=65535, norm_type=cv2.NORM_MINMAX)

And then you can view it as normal.

By default, cv2.normalize() will result in an image the same type as your input image, so if you want an unsigned 16-bit result, your input should be uint16.
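Putting it together, a minimal end-to-end sketch (assuming test.tif from the question loads as a uint16 array):

import cv2

img = cv2.imread("test.tif", -1)  # -1 keeps the original 16-bit depth
# Stretch the image's min..max onto the full 0..65535 display range
img_scaled = cv2.normalize(img, dst=None, alpha=0, beta=65535, norm_type=cv2.NORM_MINMAX)
cv2.imshow('tiff', img_scaled)
cv2.waitKey(0)
cv2.destroyAllWindows()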


Again, note that this linearly stretches your image range: if your image never actually hits 0 and, say, its lowest value is 100, then after you normalize, that lowest value will be mapped to whatever you set alpha to. If you don't want that, as one of the comments suggests, you can simply multiply your image by 16, since it currently only goes up to 4095. With * 16, it will go up to 65520, which nearly fills the 16-bit range.
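A minimal sketch of that alternative (assuming img is uint16 with no value above 4095, so the multiplication cannot overflow):

import cv2

img = cv2.imread("test.tif", -1)
img_fixed = img * 16  # 4095 * 16 = 65520, still within uint16
cv2.imshow('tiff', img_fixed)
cv2.waitKey(0)
cv2.destroyAllWindows()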

answered Sep 22 '22 by alkasm