 

OpenCV will not load a big image (~4GB)

I'm working on a program that is supposed to detect colored ground control points in a rather large image. The TIFF image is some 3 to 4 GB (about 35,000 x 33,000 pixels). I am using Python 2 and OpenCV to do the image processing.

import cv2
img = 'ortho.tif'
I = cv2.imread(img, cv2.IMREAD_COLOR)

This part does not (always) produce an error message, but showing the image does:

cv2.imshow('image', I)

I have also tried showing the image by using matplotlib:

import matplotlib.pyplot as plt
plt.imshow(I[:, :, ::-1])  # hack to change BGR to RGB (matplotlib expects RGB)

Is there any limitation in OpenCV or Python regarding large images? What would you suggest to get this image loaded?

PS: The computer I do this work on is a Windows 10 "workstation" (it has enough horsepower to deal with the image).

In advance, thanks for your help :)

asked Feb 27 '16 by cLupus

3 Answers

The implementation of imread():

Mat imread( const string& filename, int flags )
{
    Mat img;
    imread_( filename, flags, LOAD_MAT, &img );
    return img;
}

This allocates the matrix for the loaded image as one contiguous array. So this depends (at least partly) on your hardware: your machine must be able to allocate a 4 GB contiguous RAM array (if you're on a Debian distro, you may check your RAM size by running, for example, vmstat -s -SM).
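As a rough sanity check (the arithmetic here is mine, not from the question), you can estimate the size of the contiguous buffer imread will try to allocate for this image:

# Contiguous buffer needed for a 35,000 x 33,000
# 8-bit, 3-channel (BGR) image.
width, height, channels = 35000, 33000, 3
required_bytes = width * height * channels
print(required_bytes / float(1024 ** 3))  # ~3.2 GiB, in one contiguous block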

Out of curiosity, I tried to get a contiguous memory array (a big one, but smaller than the one your 4 GB image requires) using ascontiguousarray, but I stumbled on a memory allocation problem before even getting that far:

>>> import numpy
>>> img = numpy.zeros(shape=(35000, 35000))
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
MemoryError
>>>

In practice, even if you have enough RAM, it is not a good idea to manipulate all the pixels of a 4 GB image at once. You will need to split it up anyway, into regions of interest, smaller tiles, and perhaps individual channels, depending on the nature of the operations you want to perform on the pixels.
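A minimal sketch of that kind of tiled processing (the tile size and the fn callback are illustrative placeholders, not something from the question):

def process_in_tiles(img, tile_size=4096, fn=None):
    # img is a NumPy/OpenCV image; walk it in tile_size x tile_size windows.
    # Each ROI is a NumPy view into the big array, so no pixel data is copied here.
    height, width = img.shape[:2]
    for y in range(0, height, tile_size):
        for x in range(0, width, tile_size):
            roi = img[y:y + tile_size, x:x + tile_size]
            if fn is not None:
                fn(roi, x, y)  # e.g. detect ground control points in this tile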

EDIT 1:

As I said in my comment below your answer, if you have 16 GB of RAM and you're able to read that image with scikit-image, then there is no reason you cannot do the same with OpenCV.

Please give this a try:

import numpy as np # Do not forget to import numpy
import cv2    
img = cv2.imread('ortho.tif')

Note that your original code never imports NumPy. cv2.imread() itself will run without that import, but all the OpenCV array structures are converted to and from NumPy arrays, and the image you read is represented by OpenCV as an array in memory, so you will need NumPy for any further processing of the pixels.
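You can verify the interop yourself; assuming the read succeeded, the returned object is a plain NumPy array:

print(type(img))   # <type 'numpy.ndarray'> on Python 2
print(img.shape)   # (height, width, 3) for a BGR image
print(img.dtype)   # uint8 for an ordinary 8-bit TIFF
print(img.nbytes)  # total size of the pixel buffer in bytes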

EDIT 2:

OpenCV can deal with images whose size is up to 10 GB, but that holds only for the cv2.imwrite() function. For cv2.imread(), the maximum size it can read is much smaller: this is a bug reported in September 2013 (Issue3258 #1438) which is, AFAIK, still not fixed.
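Related to this, keep in mind that cv2.imread() fails silently: on failure it returns None instead of raising, which is why the error only shows up later at cv2.imshow(). A defensive check (my suggestion, not from the original answer):

import cv2

img = cv2.imread('ortho.tif', cv2.IMREAD_COLOR)
if img is None:
    # imread returns None on failure instead of raising an exception
    raise IOError('cv2.imread failed to load ortho.tif')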

answered Nov 07 '22 by Billal Begueradj


It turns out that scikit-image came to the rescue, which I found out about here.

The following let me load the image into a Python session:

import numpy as np
from skimage.io import imread

img = imread(path_to_file)

It took about half a minute or so to load.
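One caveat if you then pass the array to OpenCV routines: scikit-image reads the image in RGB channel order, while OpenCV expects BGR, so a channel reversal (a sketch, not part of the original answer) bridges the two:

import cv2

bgr = img[:, :, ::-1]                       # RGB -> BGR by reversing the channel axis
# or, equivalently, with OpenCV's own converter:
bgr = cv2.cvtColor(img, cv2.COLOR_RGB2BGR)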

answered Nov 07 '22 by cLupus


I used this thread to no avail: Remove OpenCV image size limitation. In summary: pip install tifffile, and it will load TIFF files into NumPy arrays, which can then be used with OpenCV as usual (but at your own risk with such large files; OpenCV is designed with the assumption of an image smaller than 1 gigapixel).
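A minimal sketch of that approach, assuming the ortho.tif file name from the question and a 3-channel RGB TIFF:

import tifffile
import cv2

img = tifffile.imread('ortho.tif')            # load the TIFF as a NumPy array (RGB)
roi = img[:4096, :4096]                       # work on a crop rather than all 4 GB at once
gray = cv2.cvtColor(roi, cv2.COLOR_RGB2GRAY)  # OpenCV accepts the NumPy array directly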

answered Nov 07 '22 by Jono_R