NumPy longdouble arithmetic does not seem to be done in long double when converting values

I have been playing with C99's quad-precision long double. It is my understanding that NumPy supports long double and 128-bit floats (platform specific).

However, I have run across something I cannot explain.

Given:

>>> import numpy as np
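
(As an aside, np.finfo shows what longdouble actually is on a given build; on my x86-64 Linux box it is the 80-bit extended type stored in 16 bytes, not a true IEEE quad:)

>>> np.finfo(np.longdouble).nmant      # mantissa bits: 63 for x87 extended, 112 for IEEE quad
63
>>> np.dtype(np.longdouble).itemsize   # bytes of storage; 16 here, but only 80 bits are significant
16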

Calculate a number that will require more than 64 bits but less than 128 bits to represent as an integer:

>>> 2**64+2
18446744073709551618          # note the '8' at the end
>>> int(2**64+2)
18446744073709551618          # same obviously
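
A quick bit_length check confirms this integer needs 65 bits:

>>> (2**64+2).bit_length()
65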

If I calculate the same number in C99's 128-bit long double, I get 18446744073709551618.000000.

Now, if I use NumPy's long double:

>>> a=np.longdouble(2)
>>> b=np.longdouble(64)
>>> a**b+a
18446744073709551618.0              # all good...

What about these incorrect results:

>>> np.longdouble(2**64+2)
18446744073709551616.0             # note the '6'; the conversion appears not to be done in long double
>>> np.longdouble(int(2**64+2))
18446744073709551616.0             # can't force the use of a Python long
>>> n=int(2**64+2)
>>> np.longdouble(n)
18446744073709551616.0
>>> np.longdouble(18446744073709551618)
18446744073709551616.0             # It really does not want to do '8' at the end

But this works:

>>> np.longdouble(2**64)+2
18446744073709551618.0
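
Possibly related: a power of two survives conversion through a Python float (a 64-bit double) exactly, while 2**64 + 2 does not:

>>> float(2**64) == 2**64
True
>>> float(2**64 + 2) == 2**64 + 2
False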

Question: Does NumPy have issues converting values correctly into long doubles, or is there something I am doing incorrectly?

asked Aug 30 '13 by dawg

1 Answer

You're trying to perform a type conversion between non-directly-convertible types. Take a look at the stack:

#0  0x00002aaaaab243a0 in PyLong_AsDouble ()
   from libpython2.7.so.1.0
#1  0x00002aaaaab2447a in ?? ()
   from libpython2.7.so.1.0
#2  0x00002aaaaaaf8357 in PyNumber_Float ()
   from libpython2.7.so.1.0
#3  0x00002aaaae71acdc in MyPyFloat_AsDouble (obj=0x2aaaaae93c00)
    at numpy/core/src/multiarray/arraytypes.c.src:40
#4  0x00002aaaae71adfc in LONGDOUBLE_setitem (op=0x2aaaaae93c00, 
    ov=0xc157b0 "", ap=0xbf6ca0)
    at numpy/core/src/multiarray/arraytypes.c.src:278
#5  0x00002aaaae705c82 in PyArray_FromAny (op=0x2aaaaae93c00, 
    newtype=0x2aaaae995960, min_depth=<value optimized out>, max_depth=0, 
    flags=0, context=<value optimized out>)
    at numpy/core/src/multiarray/ctors.c:1664
#6  0x00002aaaae7300ad in longdouble_arrtype_new (type=0x2aaaae9938a0, 
    args=<value optimized out>, __NPY_UNUSED_TAGGEDkwds=<value optimized out>)
    at numpy/core/src/multiarray/scalartypes.c.src:2545

As you can see, the Python long (unlimited-precision integer) 2**64 + 2 is first converted to a float (i.e. a 64-bit double), which loses precision; that float is then used to initialise the long double, but the precision has already been lost.
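
You can reproduce the loss in pure Python, since Python's float is exactly that intermediate 64-bit double:

>>> float(2**64 + 2)        # rounds to the nearest double, which is exactly 2**64
1.8446744073709552e+19
>>> int(float(2**64 + 2))   # the trailing 2 is gone before NumPy ever sees the value
18446744073709551616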

The problem is that a 128-bit long double is not a native Python type, so Python's long doesn't have a native conversion to it, only to the 64-bit double. It would probably be possible for NumPy to detect this situation and perform its own conversion using the long C API, but that might be fairly complicated for relatively little benefit (you can just do the arithmetic in np.longdouble from the start).
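
This also explains why your np.longdouble(2**64) + 2 works: 2**64 is a power of two, so it survives the round trip through a 64-bit double exactly, and the + 2 then happens in long double. Two workaround sketches along the same lines (the string form assumes your NumPy parses the literal itself at long double precision, which I believe recent versions do):

>>> n = 2**64 + 2
>>> hi, lo = divmod(n, 2**32)      # each 32-bit chunk converts to a double exactly
>>> np.longdouble(hi) * 2**32 + lo
18446744073709551618.0
>>> np.longdouble("18446744073709551618")
18446744073709551618.0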

answered Sep 18 '22 by ecatmur