I frequently convert 16-bit grayscale image data to 8-bit image data for display. It's almost always useful to adjust the minimum and maximum display intensity to highlight the 'interesting' parts of the image.
The code below does roughly what I want, but it's ugly and inefficient, and makes many intermediate copies of the image data. How can I achieve the same result with a minimum memory footprint and processing time?
import numpy

image_data = numpy.random.randint(  # Realistic images would be much larger
    low=100, high=14000, size=(1, 5, 5)).astype(numpy.uint16)
display_min = 1000
display_max = 10000.0
print(image_data)

threshold_image = ((image_data.astype(float) - display_min) *
                   (image_data > display_min))
print(threshold_image)

scaled_image = (threshold_image * (255. / (display_max - display_min)))
scaled_image[scaled_image > 255] = 255
print(scaled_image)

display_this_image = scaled_image.astype(numpy.uint8)
print(display_this_image)
Strictly speaking, what you are doing is not halftoning but intensity windowing: mapping a chosen 16-bit range onto the 8-bit display range.
The methods proposed by others work great, but they repeat a lot of expensive per-pixel computations for every image. Since a uint16 can take at most 65,536 different values, using a look-up table (LUT) can streamline things a lot. And since the LUT is small, you don't have to worry that much about doing things in place or about avoiding intermediate boolean arrays. The following code reuses Bi Rico's function to create the LUT:
import numpy as np
import timeit

rows, cols = 768, 1024
image = np.random.randint(100, 14000,
                          size=(1, rows, cols)).astype(np.uint16)
display_min = 1000
display_max = 10000

def display(image, display_min, display_max):  # copied from Bi Rico
    # Here I set copy=True in order to ensure the original image is not
    # modified. If you don't mind modifying the original image, you can
    # set copy=False or skip this step.
    image = np.array(image, copy=True)
    image.clip(display_min, display_max, out=image)
    image -= display_min
    np.floor_divide(image, (display_max - display_min + 1) / 256,
                    out=image, casting='unsafe')
    return image.astype(np.uint8)

def lut_display(image, display_min, display_max):
    lut = np.arange(2**16, dtype='uint16')
    lut = display(lut, display_min, display_max)
    return np.take(lut, image)
>>> np.all(display(image, display_min, display_max) ==
...        lut_display(image, display_min, display_max))
True
>>> timeit.timeit('display(image, display_min, display_max)',
...               'from __main__ import display, image, display_min, display_max',
...               number=10)
0.304813282062
>>> timeit.timeit('lut_display(image, display_min, display_max)',
...               'from __main__ import lut_display, image, display_min, display_max',
...               number=10)
0.0591987428298
So there is a 5x speed-up, which is not a bad thing, I guess...
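If you apply the same display range to a whole stack of frames, you can go one step further and build the LUT only once, then reuse it for every frame. A minimal sketch, assuming the display() function above is in scope; make_lut, precomputed_lut and frames are names invented here purely for illustration:

def make_lut(display_min, display_max):
    # Build the 16-bit -> 8-bit mapping a single time.
    lut = np.arange(2**16, dtype='uint16')
    return display(lut, display_min, display_max)

precomputed_lut = make_lut(display_min, display_max)
frames = np.random.randint(100, 14000,
                           size=(10, rows, cols)).astype(np.uint16)
# np.take only indexes into the LUT, so each frame costs a single gather pass.
scaled_frames = np.take(precomputed_lut, frames)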
I would avoid casting the image to float; you could do something like:
import numpy as np

def display(image, display_min, display_max):
    # Here I set copy=True in order to ensure the original image is not
    # modified. If you don't mind modifying the original image, you can
    # set copy=False or skip this step.
    image = np.array(image, copy=True)
    image.clip(display_min, display_max, out=image)
    image -= display_min
    # floor_divide with casting='unsafe' writes the scaled result back
    # into the integer array in place, avoiding a float copy of the image.
    np.floor_divide(image, (display_max - display_min + 1) / 256.,
                    out=image, casting='unsafe')
    # The 8-bit copy is made on the last line; this is what you display.
    return image.astype(np.uint8)
Here an optional copy of the image is made in its native data type, and the 8-bit copy is made (and returned) on the last line.
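For completeness, a quick usage sketch of that function, assuming the corrected version above that returns the uint8 array; image_data here just mirrors the small example from the question:

image_data = np.random.randint(100, 14000,
                               size=(1, 5, 5)).astype(np.uint16)
display_this_image = display(image_data, 1000, 10000)
print(display_this_image.dtype)  # uint8 -- ready to hand to your display code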