Python Array Memory Footprint versus List

I am flipping through the book Fluent Python. It states that for a sequence containing nothing but numbers, an array is more efficient and faster than a list. From what I gather, it should also have less memory overhead. The book states, "A Python array is as lean as a C array."

I am curious as to why the array here shows as having more memory than the list.

import array
from random import random
import sys

floats = array.array('d', (random() for i in range(10**7)))
L = [random() for i in range(10**7)]
print(sys.getsizeof(floats))
print(sys.getsizeof(L))

output

81940352
81528056
asked Apr 24 '18 by bvmcode

2 Answers

Sorry, but I don't think @millimoose's answer explains well what is really going on, or what the author meant when saying that an array is faster than a list.

Memory footprint:

A double needs 8 bytes, and that is exactly how much memory is needed if you store it in an array: it is not stored as a Python float, but as a raw 8-byte value. There is, however, a small overhead due to overallocation and some additional data kept in the array object (the size of the array, the size of the buffer, the type code of the values, and so on).
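
A quick way to see that overhead, reusing the question's example (a minimal sketch; the exact figure depends on the CPython version and platform):

import array
import sys
from random import random

floats = array.array('d', (random() for _ in range(10**7)))

# Per-element cost is only slightly above the raw 8 bytes of a C double;
# the small surplus comes from overallocation and the array object's header.
print(sys.getsizeof(floats) / 10**7)   # roughly 8.2 bytes per element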

A Python float, on the other hand, needs more than 8 bytes:

>>> import sys
>>> f=1.0
>>> sys.getsizeof(f)
24

24 bytes - quite small for a Python object! For comparison, an instance of an ordinary empty class needs (Python 3):

>>> class A:
...     pass

>>> a=A()
>>> sys.getsizeof(a)
56

56 bytes. There are tricks to reduce the number of bytes needed, and they are all used for Python floats, but you still need 8 bytes for the double value, another 8 bytes for the reference counter, and 8 bytes for a pointer to the type object (so the object knows it is a float).

In addition, it is not the object itself that is stored in the list but a reference (i.e. a pointer) to it, which needs another 8 bytes. So in total about 32 bytes are needed to keep a Python float in a list - 4 times the amount of memory of the raw value.

So why do you see something different when calling sys.getsizeof for the list? The answer: sys.getsizeof is not recursive:

sys.getsizeof(object[, default])

....

Only the memory consumption directly attributed to the object is accounted for, not the memory consumption of objects it refers to.

That means getsizeof for the list counts only the memory needed for the references to the float objects (8 bytes per reference), not the size of the objects themselves. To illustrate that:

>>> lst=[list(range(1000))]
>>> sys.getsizeof(lst)
72

Obviously, more memory than the reported 72 bytes is used.
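
Applying the same reasoning to the question's example, a rough "deep" size can be approximated by adding the size of every referenced float object (a minimal sketch; summing per-element sizes is only valid here because all the random floats are distinct objects):

import array
import sys
from random import random

floats = array.array('d', (random() for i in range(10**7)))
L = [random() for i in range(10**7)]

# Shallow size: the list only accounts for its 8-byte references.
print(sys.getsizeof(L))                                     # about 81 MB
# Rough deep size: add the 24 bytes of every referenced float object.
print(sys.getsizeof(L) + sum(sys.getsizeof(x) for x in L))  # about 320 MB
# The array already stores the raw doubles, so there is nothing to add.
print(sys.getsizeof(floats))                                # about 82 MB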

To see the real memory consumption, you need to measure the memory used by the whole interpreter process:

$ /usr/bin/time -fpeak_used_memory:%M python -c "l=list((float(i) for i in range(10**7)))"
peak_used_memory:326832
$ /usr/bin/time -fpeak_used_memory:%M python -c "import array; a=array.array('d',(float(i) for i in range(10**7)))"
peak_used_memory:88076

As we can see, the difference (roughly 320 MB vs. 86 MB, reported in KB above) is about the expected factor of 4.
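
The same difference can also be observed from within Python using the standard tracemalloc module (a minimal sketch; tracemalloc only tracks allocations made through Python's allocators, so the numbers will not match the peak-RSS figures above exactly):

import array
import tracemalloc
from random import random

def peak_mb(make):
    # Measure the peak memory allocated while make() runs, in MB.
    tracemalloc.start()
    obj = make()
    _, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    del obj
    return peak / 2**20

print(peak_mb(lambda: [random() for _ in range(10**7)]))                    # roughly 300+ MB
print(peak_mb(lambda: array.array('d', (random() for _ in range(10**7)))))  # roughly 80-90 MB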

Speed:

The author is not saying that using array.array from the Python interpreter will give you a speed-up. On the contrary, using array.array with pure-Python operations makes things slower, because every raw value must first be converted to a Python object:

lst=list(range(10**6))
%timeit sum(lst)
7.19 ms ± 461 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)

import array
a=array.array('i',range(10**6))
%timeit sum(a)
17.9 ms ± 43.7 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)

about 2.5 times slower!
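
That conversion cost is easy to observe: indexing the array has to build a fresh Python int every time, while the list simply hands back a reference to an object that already exists (a minimal sketch; the chosen index is large enough to avoid CPython's small-integer cache):

import array

a = array.array('i', range(10**6))
lst = list(range(10**6))

# The array boxes a new int object on every access ...
print(a[500000] is a[500000])      # False: two distinct int objects
# ... while the list returns the same stored object both times.
print(lst[500000] is lst[500000])  # True: same object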

However, there is potential to speed things up - it is just not as straightforward. For that one would use numpy, cython or numba. For example:

import numpy as np
b=np.array(range(10**6), dtype=np.int32)
%timeit b.sum()
1.07 ms ± 24.8 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)

Roughly 7 times faster than summing the list, and more than 16 times faster than summing the array with the built-in sum!
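
If the data already lives in an array.array, you don't even have to copy it: numpy can wrap the same buffer and run the summation natively (a minimal sketch; np.frombuffer shares the array's memory rather than copying it):

import array

import numpy as np
from random import random

floats = array.array('d', (random() for _ in range(10**6)))

# frombuffer reuses the array's raw 8-byte doubles without copying,
# so you keep the compact storage and still get a fast native sum.
view = np.frombuffer(floats, dtype=np.float64)
print(view.sum())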

answered by ead

You just picked the wrong example. The point of using array is storing items whose native representation is smaller than a Python object reference (which is 8 bytes here). E.g. if you do:

import sys
from array import array
from os import urandom

a = array('B', urandom(1024))  # 1024 random bytes, one byte per item
l = list(a)                    # the same values as full Python int objects
sys.getsizeof(a)  # => 1155
sys.getsizeof(l)  # => 9328

Since doubles are themselves 8 bytes wide, there really isn't a more compact way to store one than in a different 8 bytes.
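
The per-element storage is visible via the array's itemsize attribute (a minimal sketch; on a 64-bit CPython a list slot, i.e. a pointer, is also 8 bytes, which is why the container sizes for 'd' look so similar):

import array
import sys

print(array.array('B').itemsize)  # 1 byte per element
print(array.array('d').itemsize)  # 8 bytes per element, same as a pointer
print(sys.getsizeof(0.0))         # 24 bytes for the float object a list refers to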


As for the rest of the claims in the book, take them with a grain of salt: you can't run Python code - that is, have operations executed by the Python interpreter - and be as fast as C. You still incur overhead when writing Python objects to the array or reading them back from it; what would be faster is doing some sort of big operation over the entire array in a native function.
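
One example of such a bulk, native operation is moving the whole array to or from disk with tofile()/fromfile(), which copies the raw bytes in one go without creating per-element Python objects (a minimal sketch):

import array
from random import random
from tempfile import TemporaryFile

floats = array.array('d', (random() for _ in range(10**6)))

with TemporaryFile() as fp:          # binary temporary file
    floats.tofile(fp)                # dump the raw 8-byte doubles
    fp.seek(0)
    loaded = array.array('d')
    loaded.fromfile(fp, 10**6)       # read them back in one call

print(loaded == floats)  # True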

answered by millimoose