I am looking for an efficient way to define a variable-size (ragged) array with the numpy module, knowing that numpy only achieves its full performance with fixed-size arrays.
My solution is to define a list as:
test = [0] * 10
and then assign a numpy array to each element of "test":
import numpy

test[0] = numpy.array([1, 2, 3])
test[1] = numpy.array([4, 5])
...
I guess this may lead to scattered memory access when using such a variable-size array, although vectorized computing is maintained within each element of "test" as before.
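For example, any operation across all sub-arrays then needs a Python-level loop over the list (assuming every slot has been filled with an array):

for a in test:
    a += 1  # each sub-array is updated separately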
So, I would appreciate some help: is there an efficient way to define such arrays with the numpy module?
You could make one large allocation behind the scenes, and create a bunch of smaller views into it:
import numpy as np

backing = np.empty(10, np.int32)  # one contiguous allocation
my_data = [
    backing[0:2],   # length-2 view
    backing[2:5],   # length-3 view
    backing[5:10]   # length-5 view
]
my_data[0][...] = [1, 3]
my_data[1][...] = [1, 3, 5]
my_data[2][...] = [1, 3, 5, 7, 9]
print(my_data)
# [array([1, 3]), array([1, 3, 5]), array([1, 3, 5, 7, 9])]
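Each element of my_data is a view into backing rather than a copy; a quick check with NumPy's .base attribute confirms no separate buffers were allocated:

print(all(chunk.base is backing for chunk in my_data))
# True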
This has the nice bonus that you can still operate on the flat array if needed:
backing += 1 # add to all sublists
print(my_data)
# [array([2, 4]), array([2, 4, 6]), array([ 2, 4, 6, 8, 10])]
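If the sub-array sizes are only known at run time, the same idea generalizes. Here is a minimal sketch (the make_views helper and the sizes list are just illustrative names) that computes the slice boundaries with np.cumsum:

import numpy as np

def make_views(sizes, dtype=np.int32):
    # One contiguous allocation covering every sub-array.
    backing = np.empty(sum(sizes), dtype)
    # Cumulative offsets mark where each slice starts and stops.
    offsets = np.concatenate(([0], np.cumsum(sizes)))
    views = [backing[offsets[i]:offsets[i + 1]] for i in range(len(sizes))]
    return backing, views

backing, my_data = make_views([2, 3, 5])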