I'm looking at a third-party lib that has the following if-test:

```python
if isinstance(xx_, numpy.ndarray) and xx_.dtype is numpy.float64 and xx_.flags.contiguous:
    xx_[:] = ctypes.cast(xx_.ctypes._as_parameter_, ctypes.POINTER(ctypes.c_double))
```
It appears that `xx_.dtype is numpy.float64` always fails:

```python
>>> xx_ = numpy.zeros(8, dtype=numpy.float64)
>>> xx_.dtype is numpy.float64
False
```

What is the correct way to test that the `dtype` of a NumPy array is `float64`?
This is a bug in the lib.
`dtype` objects can be constructed dynamically, and NumPy does so all the time. There's no guarantee anywhere that they're interned, so constructing a `dtype` that already exists won't necessarily give you the same one.

On top of that, `np.float64` isn't actually a `dtype`; it's a… I don't know what these types are called, but one of the types used to construct scalar objects out of array bytes, which are usually found in the `type` attribute of a `dtype`, so I'm going to call it a `dtype.type`. (Note that `np.float64` subclasses both NumPy's numeric tower types and Python's numeric tower ABCs, while `np.dtype` of course doesn't.)
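To make the distinction concrete, here's a quick REPL sketch (plain NumPy, nothing assumed beyond what's described above):

```python
>>> import numpy as np
>>> isinstance(np.float64, np.dtype)            # the scalar type is not a dtype instance...
False
>>> isinstance(np.dtype(np.float64), np.dtype)  # ...but np.dtype(np.float64) is
True
>>> np.dtype(np.float64).type is np.float64     # the scalar type hangs off the dtype's .type
True
>>> issubclass(np.float64, float)               # and it sits in Python's numeric tower
True
```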
Normally, you can use these interchangeably; when you use a `dtype.type` (or, for that matter, a native Python numeric type) where a `dtype` was expected, a `dtype` is constructed on the fly (which, again, is not guaranteed to be interned), but of course that doesn't mean they're identical:

```python
>>> np.float64 == np.dtype(np.float64) == np.dtype('float64')
True
>>> np.float64 == np.dtype(np.float64).type
True
```
The `dtype.type` usually will be identical if you're using builtin types:

```python
>>> np.float64 is np.dtype(np.float64).type
True
```
But two `dtype`s are often not:

```python
>>> np.dtype(np.float64) is np.dtype('float64')
False
```
But again, none of that is guaranteed. (Also, note that `np.float64` and `float` use the exact same storage, but are separate types. And of course you can also make a `dtype('f8')`, which is guaranteed to work the same as `dtype(np.float64)`, but that doesn't mean `'f8'` is, or even `==`, `np.float64`.)
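A quick illustration of that last point (again, just plain NumPy; no extra assumptions):

```python
>>> np.dtype('f8') == np.dtype(np.float64)  # the two dtypes compare equal
True
>>> 'f8' == np.float64                      # but the string is not the scalar type
False
>>> np.dtype('f8') == np.float64            # dtype equality does accept the scalar type
True
```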
So, it's possible that constructing an array by explicitly passing `np.float64` as its `dtype` argument will mean you get back the same instance when you check the `dtype.type` attribute, but that isn't guaranteed. And if you pass `np.dtype('float64')`, or you ask NumPy to infer it from the data, or you pass a dtype string for it to parse like `'f8'`, etc., it's even less likely to match. More importantly, you will definitely not get `np.float64` back as the `dtype` itself.
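That matches the question's REPL session; a minimal sketch of the same behavior:

```python
>>> xx_ = np.zeros(8, dtype=np.float64)
>>> xx_.dtype is np.float64       # the dtype is never the scalar type itself
False
>>> xx_.dtype.type is np.float64  # the scalar type lives on .type (usually identical)
True
>>> xx_.dtype == np.float64       # equality is what actually works
True
```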
So, how should it be fixed?
Well, the docs define what it means for two `dtype`s to be equal, and that's a useful thing, and I think it's probably the useful thing you're looking for here. So, just replace the `is` with `==`:

```python
if isinstance(xx_, numpy.ndarray) and xx_.dtype == numpy.float64 and xx_.flags.contiguous:
```
However, to some extent I'm only guessing that's what you're looking for. (The fact that it's checking the contiguous flag implies that it's probably going to go right into the internal storage… but then why isn't it checking C vs. Fortran order, or byte order, or anything else?)
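If the code really is about to hand the raw buffer to C, a stricter guard along these lines may be closer to what it wants. This is only a hedged sketch of the extra checks the parenthetical above hints at (C order and native byte order); the helper name is hypothetical, not anything the lib actually defines:

```python
import numpy

def looks_like_c_double_buffer(xx_):
    """Sketch: is xx_ a C-contiguous, native-byte-order float64 array?"""
    return (isinstance(xx_, numpy.ndarray)
            and xx_.dtype == numpy.float64    # equality, not identity
            and xx_.dtype.isnative            # native byte order
            and xx_.flags['C_CONTIGUOUS'])    # C order specifically, not just "contiguous"
```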