I just noticed that numpy's zeros function has strange behavior:
%timeit np.zeros((1000, 1000))
1.06 ms ± 29.8 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
%timeit np.zeros((5000, 5000))
4 µs ± 66 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
On the other hand, ones seems to behave normally.
Does anybody know why initializing a small numpy array with the zeros function takes more time than it does for a large array?
(Python 3.5, numpy 1.11)
This looks like calloc hitting a threshold where it makes an OS request for zeroed memory and doesn't need to initialize it manually. Looking through the source code, numpy.zeros eventually delegates to calloc to acquire a zeroed memory block, and if you compare to numpy.empty, which doesn't perform initialization:
In [15]: %timeit np.zeros((5000, 5000))
The slowest run took 12.65 times longer than the fastest. This could mean that a
n intermediate result is being cached.
100000 loops, best of 3: 10 µs per loop
In [16]: %timeit np.empty((5000, 5000))
The slowest run took 5.05 times longer than the fastest. This could mean that an
intermediate result is being cached.
100000 loops, best of 3: 10.3 µs per loop
you can see that np.zeros has no initialization overhead for the 5000x5000 array.
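If you want to see where the threshold sits on your own machine, here's a minimal sketch that times np.zeros across sizes. The exact crossover depends on your C allocator; glibc's malloc switches to mmap for requests above 128 KiB by default, and mmap hands back pages the kernel already zeroed.

import timeit
import numpy as np

# Time np.zeros across sizes; past the allocator's threshold the
# per-call cost drops instead of growing with the array size.
for n in (100, 500, 1000, 2000, 5000):
    t = timeit.timeit(lambda: np.zeros((n, n)), number=100) / 100
    print("np.zeros((%d, %d)): %.1f us per call" % (n, n, t * 1e6))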
In fact, the OS isn't even "really" allocating that memory until you try to access it. A request for a multi-terabyte array succeeds on a machine without terabytes to spare:
In [23]: x = np.zeros(2**40) # No MemoryError!
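One rough way to watch that lazy allocation in action (a Linux-specific sketch; it reads the process's resident set size from /proc/self/status): resident memory barely moves after np.zeros, and only grows once you actually write to the pages.

import numpy as np

def rss_mib():
    # Current resident set size of this process in MiB (Linux only).
    with open("/proc/self/status") as f:
        for line in f:
            if line.startswith("VmRSS:"):
                return int(line.split()[1]) / 1024.0  # reported in kB

x = np.zeros(2**30)           # requests 8 GiB of zeroed memory
print("after zeros: %.0f MiB resident" % rss_mib())
x[:2**27] = 1.0               # touch the first 1 GiB of pages
print("after write: %.0f MiB resident" % rss_mib())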