So if I have something like x = np.random.rand(60000)*400 - 200, IPython's %timeit says:

x.astype(int) takes 0.14 ms
np.rint(x) and np.around(x) take 1.01 ms

Note that in the rint and around cases you still need to spend an extra 0.14 ms to do a final astype(int) (assuming that's what you ultimately want).

Question: am I right in thinking that most modern hardware is capable of doing both operations in equal time? If so, why is numpy taking 8 times longer for the rounding?

As it happens I'm not super fussy about the exactness of the arithmetic, but I can't see how to take advantage of that with numpy (I'm doing messy biology, not particle physics).
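For reference, here is a plain-Python sketch of the same comparison using the timeit module instead of the %timeit magic (the absolute numbers will of course depend on your machine and NumPy build):

import timeit

import numpy as np

x = np.random.rand(60000) * 400 - 200  # same test array as above

for stmt in ("x.astype(int)", "np.rint(x)", "np.around(x)"):
    # best of 5 repeats, 1000 calls each, reported as time per call
    best = min(timeit.repeat(stmt, globals={"np": np, "x": x}, number=1000, repeat=5)) / 1000
    print(f"{stmt:15s} {best * 1e6:7.1f} µs per call")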
np.around(x).astype(int) and x.astype(int) don't produce the same values. The former rounds to the nearest integer (conceptually, add 0.5 for positive values, subtract 0.5 for negative values, then truncate; exact halves go to the nearest even value instead), whereas the latter rounds towards zero. However,

y = np.trunc(x).astype(int)
z = x.astype(int)

shows y == z, but calculating y is much slower. So it's the np.trunc and np.around functions that are slow.
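To make the difference in values concrete, here is a small sketch (the input values are just illustrative):

import numpy as np

a = np.array([-1.7, -1.5, -0.2, 0.2, 1.5, 1.7, 2.5])

print(np.around(a).astype(int))  # nearest integer, ties to even: [-2 -2  0  0  2  2  2]
print(np.rint(a).astype(int))    # same rounding rule as around(): [-2 -2  0  0  2  2  2]
print(a.astype(int))             # truncation towards zero:        [-1 -1  0  0  1  1  2]
print(np.trunc(a).astype(int))   # also truncates, matches astype: [-1 -1  0  0  1  1  2]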
In [165]: x.dtype
Out[165]: dtype('float64')
In [168]: y.dtype
Out[168]: dtype('int64')
So np.trunc(x) rounds towards zero from double to double, and then astype(int) has to convert that double to int64.
Internally I don't know what python or numpy are doing but I know how I would do this in C. Let's discuss some hardware. With SSE4.1 it's possible to do round, floor, ceil, and trunc from double to double using:
_mm_round_pd(a, 0); //round: round even
_mm_round_pd(a, 1); //floor: round towards minus infinity
_mm_round_pd(a, 2); //ceil: round towards positive infinity
_mm_round_pd(a, 3); //trunc: round towards zero
but numpy needs to support systems without SSE4.1 as well, so it would have to be built both with and without SSE4.1 and then use a dispatcher.
But going from double directly to int64 using SSE/AVX is not efficient until AVX512. However, it is possible to round double to int32 efficiently using only SSE2:
_mm_cvtpd_epi32(a); //round double to int32 then expand to int64
_mm_cvttpd_epi32(a); //trunc double to int32 then expand to int64
These convert two doubles to two int64 values.
In your case this would work fine, since the range is certainly within int32. But unless python knows the range fits in int32 it can't assume this, so it would have to round or truncate to int64, which is slow. Also, once again, numpy would have to be built with SSE2 support to do this anyway.
But maybe you could have used a single floating point array to begin with. In that case you could have done:
_mm_cvtps_epi32(a); //round single to int32
_mm_cvttps_epi32(a); //trunc single to int32
These convert four singles to four int32.
So, to answer your question: SSE2 can round or truncate from double to int32 efficiently. AVX512 will be able to round or truncate from double to int64 efficiently as well, using _mm512_cvtpd_epi64(a) or _mm512_cvttpd_epi64(a). SSE4.1 can round/trunc/floor/ceil from float to float or double to double efficiently.
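Back at the NumPy level, the user-facing equivalents of those two points are to ask for a 32-bit integer result explicitly and to keep the data in single precision from the start. A minimal sketch (whether either variant is actually faster depends on your NumPy build and CPU):

import numpy as np

x = np.random.rand(60000) * 400 - 200           # double precision, as in the question

# You know the values fit in int32, so request it instead of the platform default (usually 64-bit).
t32 = x.astype(np.int32)                        # truncate towards zero, 32-bit result
r32 = np.rint(x).astype(np.int32)               # round to nearest, then cast

# Or keep everything in single precision from the beginning.
xs = (np.random.rand(60000) * 400 - 200).astype(np.float32)
rs = np.rint(xs).astype(np.int32)               # float32 -> int32 rounding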
As pointed out by @jme in the comments, the rint and around functions must work out whether to round each fraction up or down to the nearest integer. In contrast, astype always truncates towards zero, so it can immediately discard the decimal information. There are a number of other functions that do the same thing. You can also improve the speed by using a smaller integer type, but you must be careful that it can accommodate the full range of your input data.
%%timeit
np.int8(x)
10000 loops, best of 3: 165 µs per loop
Note, this does not store values outside the range -128 to 127 as it's 8-bit. Some values in your example fall outside this range.
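Before committing to a smaller integer type it is worth checking that the data actually fits. A small guard along these lines (safe_cast is just a hypothetical helper name; np.iinfo is standard NumPy) makes the assumption explicit:

import numpy as np

def safe_cast(x, dtype=np.int32):
    # Truncate-cast x to the requested integer dtype, refusing if any value is out of range
    # (the check is done on the raw floats, so it is slightly conservative).
    info = np.iinfo(dtype)
    if x.min() < info.min or x.max() > info.max:
        raise OverflowError(f"values outside {np.dtype(dtype)} range [{info.min}, {info.max}]")
    return x.astype(dtype)

x = np.random.rand(60000) * 400 - 200
y = safe_cast(x, np.int16)   # fine: all values lie in [-200, 200)
# safe_cast(x, np.int8)      # would raise: int8 only covers [-128, 127]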
Of all the others I tried, np.intc
seems to be the fastest:
%%timeit
np.int16(x)
10000 loops, best of 3: 186 µs per loop
%%timeit
np.intc(x)
10000 loops, best of 3: 169 µs per loop
%%timeit
np.int0(x)
10000 loops, best of 3: 170 µs per loop
%%timeit
np.int_(x)
10000 loops, best of 3: 188 µs per loop
%%timeit
np.int32(x)
10000 loops, best of 3: 187 µs per loop
%%timeit
np.trunc(x)
1000 loops, best of 3: 940 µs per loop
Your examples, on my machine:
%%timeit
np.around(x)
1000 loops, best of 3: 1.48 ms per loop
%%timeit
np.rint(x)
1000 loops, best of 3: 1.49 ms per loop
%%timeit
x.astype(int)
10000 loops, best of 3: 188 µs per loop