
Fastest approach to read thousands of images into one big numpy array


I'm trying to find the fastest approach to read a bunch of images from a directory into a numpy array. My end goal is to compute statistics such as the max, min, and nth percentile of the pixels from all these images. This is straightforward and fast when the pixels from all the images are in one big numpy array, since I can use the inbuilt array methods such as .max and .min, and the np.percentile function.
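For reference, the statistics step I have in mind looks roughly like this (using a random stand-in for the stacked pixels):

    import numpy as np

    # Stand-in for 25 stacked 512x512 uint16 images
    all_pixels = np.random.randint(0, 2**16, (25, 512, 512), dtype='uint16')

    print(all_pixels.max())               # global max
    print(all_pixels.min())               # global min
    print(np.percentile(all_pixels, 99))  # 99th percentile over all pixels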

Below are a few example timings with 25 TIFF images (512x512 pixels). These benchmarks are from using %%timeit in a Jupyter notebook. The differences are too small to have any practical implications for just 25 images, but I intend to read thousands of images in the future.

    # Imports
    import os
    import skimage.io as io
    import numpy as np
  1. Appending to a list

    %%timeit
    imgs = []
    img_path = '/path/to/imgs/'
    for img in os.listdir(img_path):
        imgs.append(io.imread(os.path.join(img_path, img)))
    ## 32.2 ms ± 355 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
  2. Using a dictionary

    %%timeit
    imgs = {}
    img_path = '/path/to/imgs/'
    for num, img in enumerate(os.listdir(img_path)):
        imgs[num] = io.imread(os.path.join(img_path, img))
    ## 33.3 ms ± 402 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)

For the list and dictionary approaches above, I tried replacing the loop with the respective comprehension, with similar results time-wise. I also tried preallocating the dictionary keys, with no significant difference in the time taken. To get the images from a list into one big array, I would use np.concatenate(imgs), which only takes ~1 ms, as sketched below.
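For reference, the comprehension variant and the concatenation step look roughly like this (the path is a placeholder):

    img_path = '/path/to/imgs/'  # placeholder
    imgs = [io.imread(os.path.join(img_path, img)) for img in os.listdir(img_path)]
    big = np.concatenate(imgs)   # shape (512*25, 512); the concatenation itself takes ~1 ms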

  3. Preallocating a numpy array along the first dimension

    %%timeit
    imgs = np.ndarray((512*25,512), dtype='uint16')
    img_path = '/path/to/imgs/'
    for num, img in enumerate(os.listdir(img_path)):
        imgs[num*512:(num+1)*512, :] = io.imread(os.path.join(img_path, img))
    ## 33.5 ms ± 804 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
  4. Preallocating a numpy array along the third dimension

    %%timeit
    imgs = np.ndarray((512,512,25), dtype='uint16')
    img_path = '/path/to/imgs/'
    for num, img in enumerate(os.listdir(img_path)):
        imgs[:, :, num] = io.imread(os.path.join(img_path, img))
    ## 71.2 ms ± 2.22 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)

I initially thought the numpy preallocation approaches would be faster, since there is no dynamic variable expansion in the loop, but this does not seem to be the case. The approach I find the most intuitive is the last one, where each image occupies a separate slice along the third axis of the array, but this is also the slowest. The additional time taken is not due to the preallocation itself, which only takes ~1 ms.

I have three questions regarding this:

  1. Why are the numpy preallocation approaches not faster than the dictionary and list solutions?
  2. Which is the fastest way to read thousands of images into one big numpy array?
  3. Could I benefit from looking outside numpy and scikit-image for an even faster module to read in images? I tried plt.imread(), but the scikit-image.io module is faster.
asked May 19 '17 by joelostblom




2 Answers

Part A : Accessing and assigning NumPy arrays

Going by the way elements are stored in row-major order in NumPy arrays, you are doing the right thing when you store each image along the last axis (or axes) per iteration. Each image then occupies contiguous memory locations, which is the most efficient layout for accessing and assigning values. Thus, initializations like np.ndarray((512*25,512), dtype='uint16') or np.ndarray((25,512,512), dtype='uint16') would work best, as also mentioned in the comments.
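As a quick way to see this, a sketch that inspects the strides (byte offsets between consecutive elements along each axis) of the two layouts:

    import numpy as np

    a = np.empty((25, 512, 512), dtype='uint16')  # image index on the first axis
    b = np.empty((512, 512, 25), dtype='uint16')  # image index on the last axis

    # In row-major order, each image in `a` is one contiguous 512*512*2-byte slab,
    # while consecutive pixels of one image in `b` are 25*2 = 50 bytes apart.
    print(a.strides)                        # (524288, 1024, 2)
    print(b.strides)                        # (25600, 50, 2)
    print(a[0].flags['C_CONTIGUOUS'])       # True
    print(b[..., 0].flags['C_CONTIGUOUS'])  # False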

Compiling those as functions for timing, and feeding in a random array instead of images -

    N = 512
    n = 25
    a = np.random.randint(0,255,(N,N))

    def app1():
        imgs = np.empty((N,N,n), dtype='uint16')
        for i in range(n):
            imgs[:,:,i] = a  # Storing along the first two axes
        return imgs

    def app2():
        imgs = np.empty((N*n,N), dtype='uint16')
        for num in range(n):
            imgs[num*N:(num+1)*N, :] = a  # Storing along the last axis
        return imgs

    def app3():
        imgs = np.empty((n,N,N), dtype='uint16')
        for num in range(n):
            imgs[num,:,:] = a  # Storing along the last two axes
        return imgs

    def app4():
        imgs = np.empty((N,n,N), dtype='uint16')
        for num in range(n):
            imgs[:,num,:] = a  # Storing along the first and last axes
        return imgs

Timings -

    In [45]: %timeit app1()
        ...: %timeit app2()
        ...: %timeit app3()
        ...: %timeit app4()
    10 loops, best of 3: 28.2 ms per loop
    100 loops, best of 3: 2.04 ms per loop
    100 loops, best of 3: 2.02 ms per loop
    100 loops, best of 3: 2.36 ms per loop

Those timings confirm the performance theory proposed at the start, though I expected the timing for the last setup (app4) to fall between those of app3 and app1; maybe the effect of going from the last to the first axis for accessing and assigning isn't linear. More investigation on this one could be interesting (follow-up question here).

To clarify schematically, consider that we are storing two image arrays, denoted by x (image 1) and o (image 2). We would have :

App1 :

    [[[x o]
      [x o]
      [x o]
      [x o]
      [x o]]

     [[x o]
      [x o]
      [x o]
      [x o]
      [x o]]

     [[x o]
      [x o]
      [x o]
      [x o]
      [x o]]]

Thus, in memory space, it would be : [x,o,x,o,x,o..] following row-major order.

App2 :

    [[x x x x x]
     [x x x x x]
     [x x x x x]
     [o o o o o]
     [o o o o o]
     [o o o o o]]

Thus, in memory space, it would be : [x,x,x,x,x,x...o,o,o,o,o..].

App3 :

    [[[x x x x x]
      [x x x x x]
      [x x x x x]]

     [[o o o o o]
      [o o o o o]
      [o o o o o]]]

Thus, in memory space, it would be the same as the previous one.
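These schematics can be checked directly on toy arrays by raveling them into their memory order; a small sketch (np.stack stands in for the preallocate-and-assign loop):

    import numpy as np

    x = np.full((2, 3), 'x')  # toy "image 1"
    o = np.full((2, 3), 'o')  # toy "image 2"

    last_axis = np.stack([x, o], axis=2)   # images along the last axis, as in app1
    first_axis = np.stack([x, o], axis=0)  # images along the first axis, as in app3

    print(last_axis.ravel())   # ['x' 'o' 'x' 'o' ...] - interleaved in memory
    print(first_axis.ravel())  # ['x' 'x' ... 'o' 'o'] - one contiguous block per image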


Part B : Reading image from disk as arrays

Now, for the part on reading images from disk, I have found OpenCV's imread to be much faster.

As a test, I downloaded the Mona Lisa image from its Wikipedia page and tested performance on reading it -

    import cv2  # OpenCV

    In [521]: %timeit io.imread('monalisa.jpg')
    100 loops, best of 3: 3.24 ms per loop

    In [522]: %timeit cv2.imread('monalisa.jpg')
    100 loops, best of 3: 2.54 ms per loop
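Putting the two parts together, a sketch of reading a directory of 16-bit TIFFs straight into a preallocated (n, 512, 512) array with OpenCV; the path is a placeholder, and cv2.IMREAD_UNCHANGED is assumed here so the 16-bit depth is preserved (cv2.imread returns None for unreadable files):

    import os
    import cv2
    import numpy as np

    img_path = '/path/to/imgs/'  # placeholder
    files = sorted(os.listdir(img_path))
    imgs = np.empty((len(files), 512, 512), dtype='uint16')

    for num, fname in enumerate(files):
        # Read with the original bit depth instead of converting to 8-bit
        img = cv2.imread(os.path.join(img_path, fname), cv2.IMREAD_UNCHANGED)
        imgs[num] = img  # each image fills one contiguous slab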
answered Sep 17 '22 by Divakar


In this case, most of the time will be spent reading the files from disk, and I wouldn't worry too much about the time to populate a list.

In any case, here is a script comparing four methods, without the overhead of reading an actual image from disk; it just reads an object from memory.

    import numpy as np
    import time
    from functools import wraps

    x, y = 512, 512
    img = np.random.randn(x, y)
    n = 1000

    def timethis(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            r = func(*args, **kwargs)
            end = time.perf_counter()
            print('{}.{} : {} milliseconds'.format(func.__module__, func.__name__, (end - start)*1e3))
            return r
        return wrapper

    @timethis
    def static_list(n):
        imgs = [None]*n
        for i in range(n):
            imgs[i] = img
        return imgs

    @timethis
    def dynamic_list(n):
        imgs = []
        for i in range(n):
            imgs.append(img)
        return imgs

    @timethis
    def list_comprehension(n):
        return [img for i in range(n)]

    @timethis
    def numpy_flat(n):
        imgs = np.ndarray((x*n, y))
        for i in range(n):
            imgs[x*i:(i+1)*x, :] = img

    static_list(n)
    dynamic_list(n)
    list_comprehension(n)
    numpy_flat(n)

The results show:

    __main__.static_list : 0.07004200006122119 milliseconds
    __main__.dynamic_list : 0.10294799994881032 milliseconds
    __main__.list_comprehension : 0.05021800006943522 milliseconds
    __main__.numpy_flat : 309.80870099983804 milliseconds

Obviously your best bet is the list comprehension; however, even when populating a numpy array, it's just 310 ms for reading 1000 images (from memory). So again, the overhead will be the disk read.

Why is numpy slower?

It is the way numpy stores arrays in memory: the list versions only store references to the image objects, while a numpy array has to hold a contiguous copy of every image. If we modify the python list functions to convert the list to a numpy array, the times are similar.

The modified functions now return numpy arrays:

    @timethis
    def static_list(n):
        imgs = [None]*n
        for i in range(n):
            imgs[i] = img
        return np.array(imgs)

    @timethis
    def dynamic_list(n):
        imgs = []
        for i in range(n):
            imgs.append(img)
        return np.array(imgs)

    @timethis
    def list_comprehension(n):
        return np.array([img for i in range(n)])

and the timing results:

    __main__.static_list : 303.32892100022946 milliseconds
    __main__.dynamic_list : 301.86925499992867 milliseconds
    __main__.list_comprehension : 300.76925699995627 milliseconds
    __main__.numpy_flat : 305.9309459999895 milliseconds

So the extra time is simply the cost of copying the data into a numpy array, and it is the same regardless of how the list was built...
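As a quick sanity check on that explanation, a sketch showing that the list versions only store references to the same object, while np.array makes a full copy:

    import numpy as np

    img = np.random.randn(512, 512)
    imgs = [img for _ in range(1000)]

    # Every entry is the *same* object; no pixel data was copied
    print(all(x is img for x in imgs))  # True

    # np.array copies all 1000 images into one contiguous block
    arr = np.array(imgs)
    print(arr.shape)         # (1000, 512, 512)
    print(arr.base is None)  # True: arr owns freshly copied memory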

answered Sep 17 '22 by Gerges