In which cases is using objects like numpy.r_ or numpy.c_ better (more efficient, more suitable) than using functions like concatenate or vstack, for example?
I am trying to understand a code where the programmer wrote something like:
return np.r_[0.0, 1d_array, 0.0] == 2
where 1d_array is an array whose values can be 0, 1 or 2. Why not use np.concatenate (for example) instead? Like:
return np.concatenate([[0.0], 1d_array, [0.0]]) == 2
It is more readable and apparently it does the same thing.
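For a concrete check, both spellings produce the same boolean mask on a small sample stand-in for 1d_array (the values here are made up):

import numpy as np

arr = np.array([0, 1, 2, 1, 2])                  # stand-in for 1d_array
mask_r = np.r_[0.0, arr, 0.0] == 2
mask_c = np.concatenate([[0.0], arr, [0.0]]) == 2
print(mask_r)                 # [False False False  True False  True False]
print(np.array_equal(mask_r, mask_c))            # True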
r_ translates slice objects to concatenation along the first axis. It is a simple way to build up arrays quickly.
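A minimal illustration of that translation (outputs shown as comments):

import numpy as np

print(np.r_[1:4, 0, 4])   # [1 2 3 0 4] -- the slice 1:4 becomes arange(1, 4)
print(np.r_[0:10:2])      # [0 2 4 6 8] -- step syntax works too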
np.append() and np.concatenate(): append adds values to the end of an array, while concatenate joins a sequence of arrays together. Without an axis argument, append flattens its inputs, whereas concatenate requires all inputs to have the same number of dimensions.
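A short sketch of that difference:

import numpy as np

a = np.array([[1, 2], [3, 4]])

# append without an axis flattens its inputs first
print(np.append(a, [5, 6]))           # [1 2 3 4 5 6]

# concatenate keeps dimensions; inputs must have matching ndim
print(np.concatenate([a, [[5, 6]]]))  # [[1 2]
                                      #  [3 4]
                                      #  [5 6]]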
For example, NumPy's concatenate is a very flexible tool for combining arrays, either vertically or horizontally. And then there's NumPy's hstack, which combines arrays horizontally.
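For instance (shapes shown rather than full arrays):

import numpy as np

a = np.ones((2, 2))
b = np.zeros((2, 2))

print(np.concatenate([a, b], axis=0).shape)  # (4, 2) -- vertical
print(np.concatenate([a, b], axis=1).shape)  # (2, 4) -- horizontal
print(np.hstack([a, b]).shape)               # (2, 4) -- same as axis=1 here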
Merging arrays pairwise can be a clever way to minimize the number of operations, but concatenate accepts a list of any length, so you aren't limited to pairs.
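For example, one call can merge an arbitrarily long list of arrays:

import numpy as np

parts = [np.arange(i, i + 2) for i in range(5)]  # five small arrays
print(np.concatenate(parts))                     # [0 1 1 2 2 3 3 4 4 5]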
np.r_ is implemented in the numpy/lib/index_tricks.py file. This is pure Python code, with no special compiled parts, so it is not going to be any faster than the equivalent written with concatenate, arange, and linspace. It's useful only if the notation fits your way of thinking and your needs.
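If you want to check that on your own machine, a quick timeit sketch (exact numbers depend on your NumPy version and hardware):

import timeit
import numpy as np

arr = np.arange(1000)

t_r = timeit.timeit(lambda: np.r_[0.0, arr, 0.0], number=10000)
t_c = timeit.timeit(lambda: np.concatenate([[0.0], arr, [0.0]]), number=10000)
print(t_r, t_c)   # expect the same order of magnitude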
In your example it just saves converting the scalars to lists or arrays:
In [452]: np.r_[0.0, np.array([1,2,3,4]), 0.0]
Out[452]: array([ 0.,  1.,  2.,  3.,  4.,  0.])
np.concatenate raises an error with the same arguments:
In [453]: np.concatenate([0.0, np.array([1,2,3,4]), 0.0])
...
ValueError: zero-dimensional arrays cannot be concatenated
It works correctly with the added []:
In [454]: np.concatenate([[0.0], np.array([1,2,3,4]), [0.0]])
Out[454]: array([ 0.,  1.,  2.,  3.,  4.,  0.])
hstack takes care of that by passing all arguments through [atleast_1d(_m) for _m in tup]:

In [455]: np.hstack([0.0, np.array([1,2,3,4]), 0.0])
Out[455]: array([ 0.,  1.,  2.,  3.,  4.,  0.])

So at least in simple cases, r_ is most similar to hstack.
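Spelled out, that preprocessing step amounts to roughly this (a sketch of the idea, not numpy's exact source):

import numpy as np

args = (0.0, np.array([1, 2, 3, 4]), 0.0)
promoted = [np.atleast_1d(a) for a in args]   # scalars become 1-element arrays
print(np.concatenate(promoted))               # [ 0.  1.  2.  3.  4.  0.]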
But the real usefulness of r_ comes when you want to use ranges:

np.r_[0.0, 1:5, 0.0]
np.hstack([0.0, np.arange(1,5), 0.0])
np.r_[0.0, slice(1,5), 0.0]
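All three spellings give the same result:

import numpy as np

print(np.r_[0.0, 1:5, 0.0])                    # [0. 1. 2. 3. 4. 0.]
print(np.hstack([0.0, np.arange(1, 5), 0.0]))  # [0. 1. 2. 3. 4. 0.]
print(np.r_[0.0, slice(1, 5), 0.0])            # [0. 1. 2. 3. 4. 0.]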
r_ lets you use the : syntax that is used in indexing. That's because it is actually an instance of a class that has a __getitem__ method. index_tricks uses this programming trick several times.
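A toy class shows the trick (Grabber is a made-up name, not numpy's implementation): the bracket syntax simply hands __getitem__ a tuple, in which : notation arrives as slice objects that the class can expand however it likes.

class Grabber:
    def __getitem__(self, key):
        return key        # just show what indexing delivers

g = Grabber()
print(g[0.0, 1:5, 0.0])   # (0.0, slice(1, 5, None), 0.0)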
They've thrown in other bells-n-whistles. Using an imaginary step uses np.linspace to expand the slice rather than np.arange.
np.r_[-1:1:6j, [0]*3, 5, 6]
produces:
array([-1. , -0.6, -0.2, 0.2, 0.6, 1. , 0. , 0. , 0. , 5. , 6. ])
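That first segment is exactly what np.linspace would give:

import numpy as np

print(np.r_[-1:1:6j])         # [-1.  -0.6 -0.2  0.2  0.6  1. ]
print(np.linspace(-1, 1, 6))  # same: 6 evenly spaced points, endpoint included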
There are more details in the documentation.
I did some timing tests for many slices in https://stackoverflow.com/a/37625115/901925.