I am trying to read and tile a jp2 image file. The image is RGB 98176 x 80656 pixels (it is medical image data).
When trying to read the image with glymur I get this error:
glymur.lib.openjp2.OpenJPEGLibraryError: OpenJPEG library error: Prevent buffer overflow (x1: 80656, y1: 98176)
I understand the image is too big to decode in one go. What I need is to read the image data tile by tile and save the tiles elsewhere in another format.
Glymur does let me read the header from Python; for instance, the codestream is:
>>> print(codestream.segment[1])
SIZ marker segment @ (87, 47)
Profile: no profile
Reference Grid Height, Width: (98176 x 80656)
Vertical, Horizontal Reference Grid Offset: (0 x 0)
Reference Tile Height, Width: (832 x 1136)
Vertical, Horizontal Reference Tile Offset: (0 x 0)
Bitdepth: (8, 8, 8)
Signed: (False, False, False)
Vertical, Horizontal Subsampling: ((1, 1), (1, 1), (1, 1))
Neither reading by tiles nor the read method works.
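For reference, this is roughly the kind of region read I am attempting (the exact calls depend on the glymur version; both forms fail with the same buffer-overflow error):
import glymur

jp2 = glymur.Jp2k('Sl0.jp2')
# A full read fails with the buffer-overflow error shown above
# image = jp2.read()
# A region read via array-style slicing (supported in newer glymur versions) fails the same way
tile = jp2[0:1024, 0:1024]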
Edit:
I also tried SciPy, which can read the header, but I run into the same kind of problem. The errors that arise are:
>>> import scipy.misc
>>> image=scipy.misc.imread('Sl0.jp2')
/home/user/anaconda2/lib/python2.7/site-packages/PIL/Image.py:2274: DecompressionBombWarning: Image size (7717166080 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
DecompressionBombWarning)
>>> scipy.misc.imwrite('/home/user/Documents/imageCfromjp2.tif',image)
AttributeError: 'module' object has no attribute 'imwrite'
>>> scipy.misc.imsave('/home/user/Documents/imageCfromjp2.tif',image)
File "/home/user/anaconda2/lib/python2.7/site-packages/scipy/misc/pilutil.py", line 195, in imsave
im = toimage(arr, channel_axis=2)
File "/home/user/anaconda2/lib/python2.7/site-packages/scipy/misc/pilutil.py", line 287, in toimage
raise ValueError("'arr' does not have a suitable array shape for "
ValueError: 'arr' does not have a suitable array shape for any mode.
>>> image2=image[0:500,0:500]
IndexError: too many indices for array
>>> image2=image[0:500]
ValueError: cannot slice a 0-d array
Is there any way to stream the image data into a different type of container so that the number of indices is not an issue and I can process it?
I'm facing the same problems at the moment with files from a slide scanner.
What I found useful was tiling the image using vips and openslide with the following command:
vips dzsave image.mrxs targetdirectoryname --depth one --tile-size 2048 --overlap 0
This writes level-0 (full-resolution) tiles of the source image to the target directory, with a tile size of your choosing and a pixel overlap of 0.
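If you prefer to stay in Python, the same operation should be possible through the pyvips binding; a minimal sketch, assuming the filename and output directory from the command above:
import pyvips

# Open lazily and write DeepZoom tiles, mirroring the dzsave command above
image = pyvips.Image.new_from_file('image.mrxs', access='sequential')
image.dzsave('targetdirectoryname', depth='one', tile_size=2048, overlap=0)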
The standard thing for reading huge medical images is openslide, I'd try that first. I'm not sure it will read jp2 directly, but assuming this is from a slide scanner, perhaps you could save in one of the formats that openslide supports?
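If openslide can open the file, reading it tile by tile is straightforward with read_region; a rough sketch (the filename, tile size, and output names are placeholders):
import openslide

slide = openslide.OpenSlide('slide.svs')
width, height = slide.dimensions
tile = 2048

for y in range(0, height, tile):
    for x in range(0, width, tile):
        # read_region returns an RGBA PIL image for the requested window at level 0
        region = slide.read_region((x, y), 0, (tile, tile))
        region.convert('RGB').save('tile_{}_{}.tif'.format(x, y))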
ImageMagick will load sections of large jp2 images via OpenJPEG, though it's not especially quick. I have a 10k x 10k jp2 image here, for example, and if I convert to JPG I see:
$ time convert sekscir25.jp2 x.jpg
real 0m25.378s
user 0m24.832s
sys 0m0.544s
If I try to crop out a small piece, it's hardly any quicker, suggesting that IM always decodes the entire image:
$ time convert sekscir25.jp2 -crop 100x100+0+0 x.png
real 0m19.887s
user 0m19.380s
sys 0m0.504s
But if I do the crop during load, it does speed up:
$ time convert sekscir25.jp2[100x100+0+0] x.png
real 0m7.026s
user 0m6.748s
sys 0m0.276s
Not great, but it might work if you're patient.
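If load-time cropping is fast enough for you, the per-tile crops can be scripted; a rough sketch (the filename is taken from the question, the tile size is arbitrary, and the dimensions come from the SIZ segment above):
import subprocess

width, height = 80656, 98176   # width x height from the SIZ marker segment
tile = 2048

for y in range(0, height, tile):
    for x in range(0, width, tile):
        # Crop during load so ImageMagick does not decode the whole image
        geometry = '{}x{}+{}+{}'.format(tile, tile, x, y)
        subprocess.check_call(['convert', 'Sl0.jp2[{}]'.format(geometry),
                               'tile_{}_{}.tif'.format(x, y)])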
Have you tried using openslide?
import openslide
from openslide.deepzoom import DeepZoomGenerator

# Open the slide and save a small thumbnail as a quick check
osr = openslide.OpenSlide('JP2.svs')
im = osr.get_thumbnail((200, 200))
im.save('test.jpg')
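Once the slide opens, the DeepZoomGenerator imported above can hand back individual full-resolution tiles without decoding the whole image; a sketch (tile size and tile address are illustrative):
# Build a tile pyramid view over the open slide
dz = DeepZoomGenerator(osr, tile_size=2048, overlap=0)
full_res = dz.level_count - 1          # highest deep-zoom level = full resolution
tile = dz.get_tile(full_res, (0, 0))   # PIL image of the top-left tile
tile.save('tile_0_0.jpg')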