I am trying to work with the np.longdouble dtype in my Python code, using NumPy to manipulate long doubles that I get from a C module compiled with Cython.
Suppose I do this:
import numpy as np
print np.finfo(np.longdouble)
Machine parameters for float128
---------------------------------------------------------------------
precision= 18 resolution= 1e-18
machep= -63 eps= 1.08420217249e-19
negep = -64 epsneg= 5.42101086243e-20
minexp=-16382 tiny= 3.36210314311e-4932
maxexp= 16384 max= 1.18973149536e+4932
nexp = 15 min= -max
---------------------------------------------------------------------
a = np.longdouble(1e+346)
a
Out[4]: inf
b = np.longdouble(1e+347)
b
Out[6]: inf
c = a/b
/usr/lib/python2.7/site-packages/spyderlib/widgets/externalshell/start_ipython_kernel.py:1:
RuntimeWarning: invalid value encountered in longdouble_scalars
# -*- coding: utf-8 -*-
c
Out[8]: nan
a.dtype, b.dtype, c.dtype
Out[9]: (dtype('float128'), dtype('float128'), dtype('float128'))
In essence, this is the same issue as in this question: I understand that Python first converts the 1e+346 literal into a float, whose representation is inf. However, can someone suggest a workaround? Is there a way to create NumPy longdoubles that are not converted to Python floats first?
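For what it's worth, here is a quick check (assuming an x86-style 80-bit long double, as in the finfo output above, and that np.longdouble accepts a decimal string, which it does on the builds I have seen):

import numpy as np

# The literal is evaluated as a 64-bit Python float first, so it is
# already inf before np.longdouble ever sees it:
print(np.longdouble(1e+346))    # inf

# Parsing the same value from a string is done at extended precision,
# so it stays finite (the max exponent here is about 4932):
print(np.longdouble('1e+346'))  # roughly 1e+346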
I have a C module that can output long doubles, which I want to use in a numpy array of dtype np.longdouble.
Even if the solution involves re-compiling Python/NumPy, I am willing to try it.
There are a few things you might want to take into account.
First, this is a mess. NumPy knows both longdouble and float128. Unfortunately, the names are misleading: the underlying implementation is the C long double, which is usually (but not necessarily always) an 80-bit float. (You can actually see this in the "precision" field above; 18 digits is roughly 60 bits, and the 80-bit format has a 64-bit mantissa. If real 128-bit floats were used, the precision would be around 34 digits.)
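As a rough sanity check (the exact numbers assume a typical x86-64 Linux build), the finfo parameters already tell you this:

import numpy as np

fi = np.finfo(np.longdouble)
print(np.dtype(np.longdouble).itemsize)  # 16 bytes of storage, with padding; not a true 128-bit format
print(fi.machep)                         # -63, i.e. eps = 2**-63, so a 64-bit significand (x87 extended)
print(fi.precision)                      # 18 decimal digits; IEEE binary128 would give about 33-34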
There may not be any direct way to pass long doubles as arguments to a C function, but if you pass pointers instead, you can avoid the problem. For example, you may pass your array data as uint8 (by using myarray.view(dtype='uint8')) and then cast the pointer to the buffer into long double * in your C program. At least then Python has nothing to do with type conversions. (Most probably you do not even need to take the view, because you are, after all, only exporting a pointer to the array buffer.)
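Here is a minimal sketch of that idea from the Python side, using ctypes rather than Cython just to show the pointer hand-off; the library name libmylongdouble.so and the function scale() are made up for illustration:

import ctypes
import numpy as np

arr = np.zeros(4, dtype=np.longdouble)

# Hypothetical C function:  void scale(long double *buf, size_t n);
lib = ctypes.CDLL('./libmylongdouble.so')
lib.scale.argtypes = [ctypes.POINTER(ctypes.c_longdouble), ctypes.c_size_t]
lib.scale.restype = None

# Hand the C side a raw pointer to the array buffer; no Python-level
# float conversion is involved at any point.
lib.scale(arr.ctypes.data_as(ctypes.POINTER(ctypes.c_longdouble)), arr.size)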
Just beware that this trick relies on the compiler using the same kind of type settings when compiling Python and your C program. In addition to precision differences, there may be byte-order differences (unlikely if the programs run on the same machine) and alignment differences. My Python seems to align longdouble items on 16-byte boundaries (i.e. there are always 6 bytes of zero padding per element), but a C compiler may store long double in 10, 12, or 16 bytes depending on its alignment settings.
As far as I know, the details are implementation-specific. So this is doable, but it requires some extra care and there may be portability issues.
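One thing you can do is compare the layout NumPy uses against what your C compiler reports for sizeof(long double); on the Python side that is just:

import numpy as np

ld = np.dtype(np.longdouble)
print(ld.itemsize)   # e.g. 16 on x86-64 Linux: 10 bytes of data plus 6 bytes of padding
print(ld.alignment)  # e.g. 16; compare against the C compiler's layout before sharing buffers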