Currently, I have a 3D Python list in jagged array format:
A = [[[0, 0, 0], [0, 0, 0], [0, 0, 0]], [[0], [0], [0]]]
Is there any way I could convert this list to a NumPy array, in order to use certain NumPy array operators such as adding a number to each element?
A + 4
would give
[[[4, 4, 4], [4, 4, 4], [4, 4, 4]], [[4], [4], [4]]].
Assigning
B = numpy.array(A)
and then attempting
B + 4
throws a type error:
TypeError: can only concatenate list (not "float") to list
Is a conversion from a jagged Python list to a NumPy array possible while retaining the structure (I will need to convert it back later), or is looping through the array and adding the required value the better solution in this case?
The answers by @SonderingNarcissit and @MadPhysicist are already quite nice.
Here is a quick way of adding a number to each element in your list while keeping the structure. You can replace the function return_number
with anything you like if you want to do something other than adding a number:
def return_number(my_number):
    return my_number + 4

def add_number(my_list):
    if isinstance(my_list, (int, float)):
        return return_number(my_list)
    else:
        return [add_number(xi) for xi in my_list]
A = [[[0, 0, 0], [0, 0, 0], [0, 0, 0]], [[0], [0], [0]]]
Then
print(add_number(A))
gives you the desired output:
[[[4, 4, 4], [4, 4, 4], [4, 4, 4]], [[4], [4], [4]]]
What it does is look recursively through your list of lists, and every time it finds a number it adds the value 4; this should work for arbitrarily deep nested lists. It currently only works for numbers and lists; if your lists also contain e.g. dictionaries, then you would have to add another if-clause, as sketched below.
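For example, such an extra if-clause could look roughly like this (a sketch, assuming the operation should be applied to each dictionary value):

def add_number(my_list):
    if isinstance(my_list, (int, float)):
        return return_number(my_list)
    elif isinstance(my_list, dict):
        # hypothetical extra clause: recurse into each value of the dict
        return {key: add_number(value) for key, value in my_list.items()}
    else:
        return [add_number(xi) for xi in my_list]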
Since numpy can only work with regular-shaped arrays, it checks that all the elements of a nested iterable are the same length for a given dimension. If they are not, it still creates an array, but of type np.object
instead of np.int
as you would expect:
>>> B = np.array(A)
>>> B
array([[[0, 0, 0], [0, 0, 0], [0, 0, 0]],
[[0], [0], [0]]], dtype=object)
In this case, the "objects" are lists. Addition is defined for lists, but only in terms of other lists that extend the original, hence your error. [0, 0] + 4
is an error, while [0, 0] + [4]
is [0, 0, 4]
. Neither is what you want.
It may be interesting that numpy will make the object portion of your array nest as low as possible. The array you created is actually a 2D numpy array containing lists, not a 1D array containing nested lists:
>>> B[0, 0]
[0, 0, 0]
>>> B[0, 0, 0]
Traceback (most recent call last):
File "<ipython-input-438-464a9bfa40bf>", line 1, in <module>
B[0, 0, 0]
IndexError: too many indices for array
As you pointed out, you have two options when it comes to ragged arrays. The first is to pad the array so it is non-ragged, convert it to numpy, and only use the elements you care about. This does not seem very convenient in your case.
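For completeness, here is a rough sketch of that padding approach (variable names are illustrative; it assumes, as in your example, that only the innermost lists are ragged and that padded positions can simply be discarded afterwards):

import numpy as np

A = [[[0, 0, 0], [0, 0, 0], [0, 0, 0]], [[0], [0], [0]]]

# remember the original lengths, then pad every innermost list to the same size
max_len = max(len(inner) for outer in A for inner in outer)
lengths = [[len(inner) for inner in outer] for outer in A]
padded = np.array([[inner + [0] * (max_len - len(inner)) for inner in outer]
                   for outer in A])            # regular shape (2, 3, 3)

result = padded + 4                            # fast vectorized addition

# keep only the elements that existed in the original structure
trimmed = [[row[:n].tolist() for row, n in zip(outer, outer_lens)]
           for outer, outer_lens in zip(result, lengths)]
# trimmed == [[[4, 4, 4], [4, 4, 4], [4, 4, 4]], [[4], [4], [4]]]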
The other method is to apply functions to your nested array directly. Luckily for you, I wrote a snippet/recipe in response to this question, which does exactly what you need, down to being able to support arbitrary levels of nesting and your choice of operators. I have upgraded it here to accept non-iterable nested elements anywhere along the list, including the original input, and to do a primitive form of broadcasting:
from itertools import repeat

def elementwiseApply(op, *iters):
    def isIterable(x):
        """
        This function is also defined in numpy as `numpy.iterable`.
        """
        try:
            iter(x)
        except TypeError:
            return False
        return True

    def apply(op, *items):
        """
        Applies the operator to the given arguments. If any of the
        arguments are iterable, the non-iterables are broadcast by
        `itertools.repeat` and the function is applied recursively
        on each element of the zipped result.
        """
        elements = []
        count = 0
        for item in items:
            if isIterable(item):
                elements.append(item)
                count += 1
            else:
                elements.append(repeat(item))
        if count == 0:
            return op(*items)
        return [apply(op, *items) for items in zip(*elements)]

    return apply(op, *iters)
This is a pretty general solution that will work with just about any kind of input. Here are a couple of sample runs showing how it is relevant to your question:
>>> from operator import add
>>> elementwiseApply(add, 4, 4)
8
>>> elementwiseApply(add, [4, 0], 4)
[8, 4]
>>> elementwiseApply(add, [(4,), [0, (1, 3, [1, 1, 1])]], 4)
[[8], [4, [5, 7, [5, 5, 5]]]]
>>> elementwiseApply(add, [[0, 0, 0], [0, 0], 0], [[4, 4, 4], [4, 4], 4])
[[4, 4, 4], [4, 4], 4]
>>> elementwiseApply(add, [(4,), [0, (1, 3, [1, 1, 1])]], [1, 1, 1])
[[5], [1, [2, 4, [2, 2, 2]]]]
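And applied to the ragged list from your question, the scalar is broadcast down to every element:

>>> elementwiseApply(add, [[[0, 0, 0], [0, 0, 0], [0, 0, 0]], [[0], [0], [0]]], 4)
[[[4, 4, 4], [4, 4, 4], [4, 4, 4]], [[4], [4], [4]]]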
The result is always a new list or scalar, depending on the types of the inputs. The number of inputs must be the number accepted by the operator. operator.add
always takes two inputs, for example.
Looping and adding is likely better, since you want to preserve the structure of the original. Plus, the error you mentioned indicates that you would need to flatten the numpy array and then add to each element. Although numpy operations tend to be faster than list operations, converting, flattening, and reverting is cumbersome and will probably offset any gains.
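If the nesting is fixed at three levels, as in your example, that loop can be a simple nested comprehension (a minimal sketch, assuming every leaf is a number):

A = [[[0, 0, 0], [0, 0, 0], [0, 0, 0]], [[0], [0], [0]]]
B = [[[x + 4 for x in inner] for inner in middle] for middle in A]
# B == [[[4, 4, 4], [4, 4, 4], [4, 4, 4]], [[4], [4], [4]]]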
If we turn your list into an array, we get a 2D array of objects:
In [1941]: A = [[[0, 0, 0], [0, 0, 0], [0, 0, 0]], [[0], [0], [0]]]
In [1942]: A = np.array(A)
In [1943]: A.shape
Out[1943]: (2, 3)
In [1944]: A
Out[1944]:
array([[[0, 0, 0], [0, 0, 0], [0, 0, 0]],
[[0], [0], [0]]], dtype=object)
When I try A+1
it iterates over the elements of A
and tries to do +1
for each. In the case of a numeric array it can do that in fast compiled code. With an object array it has to invoke the +
operation for each element.
In [1945]: A+1
...
TypeError: can only concatenate list (not "int") to list
Let's try that again with a flat iteration over A
:
In [1946]: for a in A.flat:
...: print(a+1)
...
TypeError: can only concatenate list (not "int") to list
The elements of A
are lists; +
for a list is concatenation:
In [1947]: for a in A.flat:
...: print(a+[1])
...:
[0, 0, 0, 1]
[0, 0, 0, 1]
[0, 0, 0, 1]
[0, 1]
[0, 1]
[0, 1]
If the elements of A
were themselves arrays, I think the +1
would work.
In [1956]: for i, a in np.ndenumerate(A):
...: A[i]=np.array(a)
...:
In [1957]: A
Out[1957]:
array([[array([0, 0, 0]), array([0, 0, 0]), array([0, 0, 0])],
[array([0]), array([0]), array([0])]], dtype=object)
In [1958]: A+1
Out[1958]:
array([[array([1, 1, 1]), array([1, 1, 1]), array([1, 1, 1])],
[array([1]), array([1]), array([1])]], dtype=object)
And to get back to the pure list form, we have to apply tolist
to both the elements of the object array and to the array itself:
In [1960]: A1=A+1
In [1961]: for i, a in np.ndenumerate(A1):
...: A1[i]=a.tolist()
In [1962]: A1
Out[1962]:
array([[[1, 1, 1], [1, 1, 1], [1, 1, 1]],
[[1], [1], [1]]], dtype=object)
In [1963]: A1.tolist()
Out[1963]: [[[1, 1, 1], [1, 1, 1], [1, 1, 1]], [[1], [1], [1]]]
This is a rather roundabout way of adding a value to all the elements of nested lists. I could have done that with one iteration:
In [1964]: for i,a in np.ndenumerate(A):
...: A[i]=[x+1 for x in a]
...:
In [1965]: A
Out[1965]:
array([[[1, 1, 1], [1, 1, 1], [1, 1, 1]],
[[1], [1], [1]]], dtype=object)
So doing math on object arrays is hit and miss. Some operations do propagate to the elements, but even those depend on how the elements behave.
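For reference, the whole round trip from the earlier cells could be wrapped in one helper (a sketch; add_to_jagged is a hypothetical name, and it assumes a nested list of numbers like the one in the question):

import numpy as np

def add_to_jagged(A, value):
    B = np.array(A, dtype=object)      # 2D object array containing lists
    for i, a in np.ndenumerate(B):
        B[i] = np.array(a)             # lists -> arrays, so + can propagate
    B = B + value                      # invokes + on each element array
    for i, a in np.ndenumerate(B):
        B[i] = a.tolist()              # arrays -> back to lists
    return B.tolist()                  # object array -> nested list

add_to_jagged([[[0, 0, 0], [0, 0, 0], [0, 0, 0]], [[0], [0], [0]]], 4)
# -> [[[4, 4, 4], [4, 4, 4], [4, 4, 4]], [[4], [4], [4]]]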