I would like to create numpy.ndarray objects that hold complex integer values in them. NumPy does have complex support built in, but for floating-point formats (float and double) only; I can create an ndarray with dtype='cfloat', for example, but there is no analogous dtype='cint16'. I would like to be able to create arrays that hold complex values represented using either 8- or 16-bit integers.
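To illustrate the gap (a sketch of my own; exact dtype alias names vary a bit across NumPy versions):
import numpy as np

# complex floating-point dtypes exist out of the box...
a = np.zeros(4, dtype='complex64')

# ...but there is no complex-integer analogue; a name like
# 'cint16' is simply not understood:
try:
    np.dtype('cint16')
except TypeError as e:
    print(e)  # data type 'cint16' not understood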
I found this mailing list post from 2007 where someone inquired about such support. The only workaround they recommended involved defining a new dtype that holds pairs of integers. This seems to represent each array element as a tuple of two values, but it's not clear what other work would need to be done in order to make the resulting data type work seamlessly with arithmetic functions.
I also considered another approach based on registering user-defined types with NumPy. I don't have a problem with going to the C API to set this up if it will work well. However, the documentation for the type descriptor structure seems to suggest that the type's kind field only supports signed/unsigned integer, floating-point, and complex floating-point numeric types. It's not clear that I would be able to get anywhere trying to define a complex integer type.
What are some recommendations for an approach that may work?
Whatever scheme I select, it must be amenable to wrapping existing complex integer buffers without performing a copy. That is, I would like to be able to use PyArray_SimpleNewFromData() to expose the buffer to Python without having to make a copy of it first. The buffer would already be in interleaved real/imaginary format, and would be an array of either int8_t or int16_t.
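For illustration, here is a Python-level analogue of that zero-copy wrapping using np.frombuffer and a pair-of-int16 dtype (a sketch only; the real code would go through the C API):
import numpy as np

# a raw interleaved buffer: re0, im0, re1, im1, ... as int16
raw = np.array([1, 2, 3, 4, 5, 6], dtype=np.int16).tobytes()

# view the bytes as (re, im) pairs without copying them
pair = np.dtype([('re', np.int16), ('im', np.int16)])
x = np.frombuffer(raw, dtype=pair)
print(x)                # [(1, 2) (3, 4) (5, 6)]
print(x.flags.owndata)  # False -- the buffer is wrapped, not copied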
I also deal with lots of complex integer data, generally basebanded data.
I use
dtype = np.dtype([('re', np.int16), ('im', np.int16)])
It's not perfect, but it adequately describes the data, and it lets me load the data into memory without doubling its size. It also has the advantage of loading and storing transparently with HDF5, where it maps to this compound type:
DATATYPE H5T_COMPOUND {
   H5T_STD_I16LE "re";
   H5T_STD_I16LE "im";
}
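A round trip through h5py might look like this (my sketch; the file and dataset names are hypothetical):
import numpy as np
import h5py

dtype = np.dtype([('re', np.int16), ('im', np.int16)])
x = np.zeros((3, 3), dtype)

with h5py.File('samples.h5', 'w') as f:  # hypothetical filename
    f.create_dataset('iq', data=x)       # stored as the compound type above

with h5py.File('samples.h5', 'r') as f:
    y = f['iq'][...]                     # read back with the same dtype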
Using it is straightforward, just slightly different from a native dtype.
x = np.zeros((3, 3), dtype)
x[0, 0]['re'] = 1
x[0, 0]['im'] = 2
x
>> array([[(1, 2), (0, 0), (0, 0)],
>>        [(0, 0), (0, 0), (0, 0)],
>>        [(0, 0), (0, 0), (0, 0)]],
>>       dtype=[('re', '<i2'), ('im', '<i2')])
To do math with it, I convert to a native complex float type. The obvious approach doesn't work, but it's also not that hard.
y = x.astype(np.complex64)  # doesn't work, only gets the real part
y = x['re'] + 1.j*x['im']   # works, but slow and big
y = x.view(np.int16).astype(np.float32).view(np.complex64)  # fast: reinterpret as flat int16, widen to float32, reinterpret pairs as complex64
y
>> array([[ 1.+2.j, 0.+0.j, 0.+0.j],
>>        [ 0.+0.j, 0.+0.j, 0.+0.j],
>>        [ 0.+0.j, 0.+0.j, 0.+0.j]], dtype=complex64)
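Going back is the same trick in reverse (my addition, not from the original answer; the values must fit in int16):
x2 = y.view(np.float32).astype(np.int16).view(dtype)  # complex64 -> (re, im) int16 pairs, same shape as x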
This last conversion approach was inspired by an answer to "What's the fastest way to convert an interleaved NumPy integer array to complex64?".
Consider using matrices of the form [[a, -b], [b, a]] as a stand-in for the complex numbers.
Ordinary matrix addition and multiplication then correspond to addition and multiplication of complex numbers (this subring of the collection of 2x2 matrices is isomorphic to the complex numbers).
NumPy handles integer matrix algebra natively, so the entries can stay 8- or 16-bit integers.
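A minimal sketch of the idea (my illustration; note that products can overflow int16):
import numpy as np

def as_matrix(a, b):
    # represent a + bi as the 2x2 integer matrix [[a, -b], [b, a]]
    return np.array([[a, -b], [b, a]], dtype=np.int16)

z = as_matrix(1, 2)   # 1 + 2i
w = as_matrix(3, -1)  # 3 - 1i
prod = z @ w          # matrix product == complex product
print(prod)
# [[ 5 -5]
#  [ 5  5]]  -> 5 + 5i, and indeed (1+2j)*(3-1j) == (5+5j)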