From what I've read about NumPy arrays, they're more memory efficient than standard Python lists. What confuses me is that when you create a NumPy array, you have to pass in a Python list. I assume this list gets deconstructed, but to me it seems to defeat the purpose of a memory-efficient data structure if you have to build a larger, inefficient structure just to create the efficient one.
Does numpy.zeros get around this?
There are many ways to create a NumPy array. Passing a Python list to np.array or np.asarray is just one of them.
Another way is to use an iterator:
In [11]: np.fromiter(range(10), count=10, dtype='float')
Out[11]: array([ 0., 1., 2., 3., 4., 5., 6., 7., 8., 9.])
In this case, no big temporary Python list is involved. Instead of building a Python list, you can define a generator function that yields the items, then pass the generator to np.fromiter to create the array. Since np.fromiter always creates a 1D array, use reshape on the returned value to get a higher-dimensional array.
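As a minimal sketch of that pattern (the generator `squares` is just an illustrative example, not from the question):

```python
import numpy as np

# A generator yields values one at a time -- no temporary Python list is built.
def squares(n):
    for i in range(n):
        yield float(i * i)

# np.fromiter consumes the generator directly; passing count lets NumPy
# preallocate the output buffer instead of growing it as items arrive.
arr = np.fromiter(squares(6), dtype='float', count=6)

# fromiter always returns a 1D array; reshape it for higher dimensions.
grid = arr.reshape(2, 3)
```

Here `arr` is the 1D array `[0., 1., 4., 9., 16., 25.]`, and `grid` is the same data viewed as a 2x3 array.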
There are also np.fromfunction, np.frombuffer, np.fromfile, np.loadtxt, np.genfromtxt, np.fromstring, np.zeros, np.empty and np.ones. All of these create NumPy arrays without building large temporary Python objects.
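So yes, np.zeros does get around the issue from the question: it allocates the array buffer directly and you fill in values afterwards. A short sketch:

```python
import numpy as np

# np.zeros allocates and zero-fills the buffer directly -- no Python list involved.
a = np.zeros((3, 4), dtype='float64')
a[1, 2] = 7.0  # assign values after allocation

# np.empty skips even the zero-initialization, so its contents are
# arbitrary until you write to them -- fill before reading.
b = np.empty(5, dtype='int64')
b[:] = range(5)
```

This preallocate-then-fill style is the usual way to avoid intermediate Python lists when the final array size is known up front.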