I seem to have found a pitfall when using .sum() on numpy arrays, but I'm unable to find an explanation. Essentially, if I try to sum a large array I start getting nonsensical answers, but this happens silently and I can't make sense of the output well enough to Google the cause.
For example, this works exactly as expected:
a = sum(xrange(2000))
print('a is {}'.format(a))
b = np.arange(2000).sum()
print('b is {}'.format(b))
Giving the same output for both:
a is 1999000
b is 1999000
However, this does not work:
c = sum(xrange(200000))
print('c is {}'.format(c))
d = np.arange(200000).sum()
print('d is {}'.format(d))
Giving the following output:
c is 19999900000
d is -1474936480
And on an even larger array, it's possible to get back a positive result. This is more insidious, because I might not identify that anything unusual is happening at all. For example, this:
e = sum(xrange(100000000))
print('e is {}'.format(e))
f = np.arange(100000000).sum()
print('f is {}'.format(f))
Gives this:
e is 4999999950000000
f is 887459712
I guessed that this had to do with data types, and indeed even using the Python float type seems to fix the problem:
e = sum(xrange(100000000))
print('e is {}'.format(e))
f = np.arange(100000000, dtype=float).sum()
print('f is {}'.format(f))
Giving:
e is 4999999950000000
f is 4.99999995e+15
I have no background in Comp. Sci. and found myself stuck (perhaps this is a dupe). Things I've tried:

numpy arrays have a fixed size. Nope; this seems to show I should hit a MemoryError first.
sum behaviour; nope (?). I found this but I can't see how it applies.

Can someone please explain briefly what I'm missing, and tell me what I need to read up on? Also, other than remembering to define a dtype each time, is there a way to stop this happening or give a warning?
Possibly relevant:
Windows 7
numpy 1.11.3
Running out of Enthought Canopy on Python 2.7.9
On Windows (even on a 64-bit system), the default integer type NumPy uses when converting from Python ints is 32-bit. On Linux and Mac it is 64-bit.
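You can confirm which default your build uses with a quick check (my own sanity check, not part of the original answer):
import numpy as np
print(np.arange(10).dtype)  # prints int32 on Windows builds, int64 on Linux/Mac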
Specify a 64-bit integer and it will work:
d = np.arange(200000, dtype=np.int64).sum()
print('d is {}'.format(d))
Output:
d is 19999900000
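For reference, the garbage value in the question is exactly what 32-bit wraparound predicts. A quick sketch of the arithmetic (my own illustration, not from the answer):
true_sum = sum(xrange(200000))  # 19999900000, too big for 32 bits
wrapped = true_sum % 2**32      # keep only the low 32 bits -> 2820030816
if wrapped >= 2**31:            # reinterpret the bit pattern as a signed int
    wrapped -= 2**32
print(wrapped)                  # -1474936480, matching the question's output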
While not the most elegant solution, you can do some monkey patching using functools.partial:
from functools import partial
np.arange = partial(np.arange, dtype=np.int64)
From now on, np.arange works with 64-bit integers as its default.
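For example, the failing snippet from the question now gives the correct result:
d = np.arange(200000).sum()  # dtype=np.int64 is injected by the patched arange
print('d is {}'.format(d))   # d is 19999900000
Bear in mind that this patch is process-wide: any library code that calls np.arange will also get int64 arrays, which may not be what it expects.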
This is clearly numpy's integer type overflowing 32 bits. Normally you can configure numpy to fail in such situations using np.seterr:
>>> import numpy as np
>>> np.seterr(over='raise')
{'divide': 'warn', 'invalid': 'warn', 'over': 'warn', 'under': 'ignore'}
>>> np.int8(127) + np.int8(2)
FloatingPointError: overflow encountered in byte_scalars
However, sum is explicitly documented with the behaviour "No error is raised on overflow", so you might be out of luck here. Using numpy is often a trade-off of performance for convenience!
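To illustrate (continuing the session above, and forcing a 32-bit accumulator so the result is the same on any platform):
>>> np.arange(200000, dtype=np.int32).sum(dtype=np.int32)  # wraps silently despite over='raise'
-1474936480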
You can however manually specify the dtype for the accumulator, like this:
>>> a = np.ones(129)
>>> a.sum(dtype=np.int8) # will overflow
-127
>>> a.sum(dtype=np.int64) # no overflow
129
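If you want an automatic warning, one workaround is a small wrapper that compares the native result against a 64-bit accumulation. This is only a sketch, and checked_sum is a hypothetical helper, not part of numpy:
import warnings
import numpy as np

def checked_sum(arr):
    # Hypothetical helper: sum with the array's native accumulator,
    # then re-sum with a 64-bit accumulator and warn on any mismatch.
    native = arr.sum()
    wide = arr.sum(dtype=np.int64)
    if native != wide:
        warnings.warn('overflow detected in sum()', RuntimeWarning)
    return wide

print(checked_sum(np.arange(200000)))  # 19999900000, with a warning on builds where the default int is 32-bit
It does the summation twice, which again fits the answer's point that you are trading performance for convenience.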
Watch ticket #593, because this is an open issue and it might be fixed by numpy devs sometime.