Strange behavior during array allocation in numpy [duplicate]

Tags: python, numpy

I just noticed that numpy's zeros function behaves strangely:

%timeit np.zeros((1000, 1000))
1.06 ms ± 29.8 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)

%timeit np.zeros((5000, 5000))
4 µs ± 66 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)

On the other hand, ones seems to behave normally. Does anybody know why initializing a small numpy array with the zeros function takes more time than initializing a large one?

(Python 3.5, numpy 1.11)
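
For reference, timing ones on the same shapes (an illustrative check added here, not part of the original question; absolute numbers will vary by machine) shows its cost growing with the array size, unlike zeros:

import numpy as np

%timeit np.ones((1000, 1000))   # has to write 1.0 into every element
%timeit np.ones((5000, 5000))   # 25x the elements, so expect a correspondingly larger time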

asked Nov 18 '22 by Ipse Lium

1 Answer

This looks like calloc hitting a threshold where it requests zeroed memory directly from the OS and doesn't need to initialize it manually. Looking through the source code, numpy.zeros eventually delegates to calloc to acquire a zeroed memory block. If you compare it to numpy.empty, which performs no initialization:

In [15]: %timeit np.zeros((5000, 5000))
The slowest run took 12.65 times longer than the fastest. This could mean that an intermediate result is being cached.
100000 loops, best of 3: 10 µs per loop

In [16]: %timeit np.empty((5000, 5000))
The slowest run took 5.05 times longer than the fastest. This could mean that an intermediate result is being cached.
100000 loops, best of 3: 10.3 µs per loop

you can see that np.zeros has no initialization overhead for the 5000x5000 array.
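
The threshold itself comes from the C allocator rather than from numpy. With glibc, small calloc requests are satisfied from recycled heap memory that has to be re-zeroed on every call, while requests above the (dynamic) mmap threshold get fresh, already-zeroed pages from the kernel. Here is a rough sketch to look for the crossover; the exact size at which it appears depends on your allocator and is an assumption here, not something taken from the answer:

import timeit
import numpy as np

# Per-call cost of np.zeros for growing element counts. Above some
# allocator-dependent size, calloc takes the mmap path and the time stops
# scaling with the number of elements, because no explicit zeroing is done.
for n in (10_000, 100_000, 1_000_000, 10_000_000):
    t = timeit.timeit(lambda: np.zeros(n), number=100) / 100
    print(f"np.zeros({n:>10,}) -> {t * 1e6:8.1f} µs per call")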

In fact, the OS isn't even "really" allocating that memory until you try to access it. A request for terabytes of array space succeeds on a machine without terabytes of RAM to spare:

In [23]: x = np.zeros(2**40)  # No MemoryError!
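
You only pay for those pages when you write to them. A minimal way to watch that happen, sketched here under the assumption of a Unix system (it uses the standard-library resource module; ru_maxrss is reported in KiB on Linux but in bytes on macOS):

import resource
import numpy as np

def peak_rss_mib():
    # Peak resident set size so far (KiB on Linux, hence / 1024 for MiB).
    return resource.getrusage(resource.RUSAGE_SELF).ru_maxrss / 1024

print("baseline:       ", peak_rss_mib(), "MiB")

x = np.zeros((5000, 5000))   # ~200 MB of float64, but only virtual memory so far
print("after np.zeros: ", peak_rss_mib(), "MiB")

x[:] = 1.0                   # the first write faults the pages in and commits real RAM
print("after writing:  ", peak_rss_mib(), "MiB")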
answered Nov 24 '22 by user2357112 supports Monica