 

High performance array mean

I've got a performance bottleneck. I'm computing the column-wise mean of large arrays (250 rows & 1.3 million columns), and I do so more than a million times in my application.

My test case in Python:

import numpy as np
big_array = np.random.random((250, 1300000))
%timeit mean = big_array.mean(axis = 0) # ~400 milliseconds

Numpy takes around 400 milliseconds on my machine, running on a single core. I've tried several other matrix libraries across different languages (Cython, R, Julia, Torch), but found only Julia to beat Numpy, by taking around 250 milliseconds.

Can anyone demonstrate a substantial performance improvement on this task? Perhaps it's a task suited to the GPU?

Edit: My application is evidently memory-constrained, and its performance is dramatically improved by accessing elements of a large array only once, rather than repeatedly. (See comment below.)
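Since the workload is memory-bound, one way to touch each element only once is to maintain a running column sum instead of recomputing the mean over the full array on every update. This is a hypothetical sketch (the question doesn't say why the mean is recomputed a million times; here I assume the array changes one row at a time):

```python
import numpy as np

# Smaller shape for illustration; the question uses (250, 1_300_000).
rows, cols = 250, 1000
data = np.random.random((rows, cols))

col_sum = data.sum(axis=0)   # one full pass over the array
mean = col_sum / rows

# Replace row 0 with fresh data: update the running sum instead of
# re-reading all 250 rows.
new_row = np.random.random(cols)
col_sum += new_row - data[0]
data[0] = new_row
mean = col_sum / rows

# The incrementally maintained mean matches a full recomputation.
print(np.allclose(mean, data.mean(axis=0)))
```

Each update then reads one row (plus the accumulator) rather than the whole matrix, which is exactly the single-pass access pattern the edit describes.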

Carlos De la Guardia asked Apr 01 '16 08:04


1 Answer

Julia, if I'm not mistaken, uses Fortran (column-major) ordering in memory, whereas NumPy uses C (row-major) layout by default. So if you rearrange things to use the same layout, so that the mean runs along contiguous memory, you get better performance:

In [1]: import numpy as np

In [2]: big_array = np.random.random((250, 1300000))

In [4]: big_array_f = np.asfortranarray(big_array)

In [5]: %timeit mean = big_array.mean(axis = 0)
1 loop, best of 3: 319 ms per loop

In [6]: %timeit mean = big_array_f.mean(axis = 0)
1 loop, best of 3: 205 ms per loop

Or you can just swap the dimensions and take the mean over the other axis:

In [10]: big_array = np.random.random((1300000, 250))

In [11]: %timeit mean = big_array.mean(axis = 1)
1 loop, best of 3: 205 ms per loop
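One caveat worth noting: `np.asfortranarray` makes a full copy when the input is C-ordered, so the first approach doubles memory use unless the data is generated in Fortran order to begin with. A small sketch to check which layout an array actually has, and that both layouts give the same result:

```python
import numpy as np

# Smaller shape for illustration; the question uses (250, 1_300_000).
a = np.random.random((250, 1000))

# NumPy default is C order: rows are contiguous, so a column-wise
# mean (axis=0) strides across memory.
print(a.flags['C_CONTIGUOUS'])

# asfortranarray returns a column-contiguous *copy* here (extra memory).
f = np.asfortranarray(a)
print(f.flags['F_CONTIGUOUS'])

# Both layouts produce the same column means.
print(np.allclose(a.mean(axis=0), f.mean(axis=0)))
```

If the copy is unaffordable, the transposed-shape variant shown above (building the array as `(1300000, 250)` and averaging over `axis=1`) gets the contiguous access pattern without any conversion.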
JoshAdel answered Oct 30 '22 09:10