I am working with numpy arrays of a range of data types (uint8, uint16, int16, etc.). I would like to be able to check whether a number can be represented within the limits of an array's datatype. I am imagining something that looks like:
>>> im.dtype
dtype('uint16')
>>> dtype_max(im.dtype)
65535
>>> dtype_min(im.dtype)
0
Does something like this exist? By the way, I feel like this has to have been asked before, but my search came up empty, and all of the "similar questions" appear to be unrelated.
Edit: Of course, now that I've asked, one of the "related" questions does have the answer. Oops.
min_value = np.iinfo(im.dtype).min
max_value = np.iinfo(im.dtype).max
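For the uint16 image in the question, for instance, this gives:

>>> np.iinfo(np.uint16).min
0
>>> np.iinfo(np.uint16).max
65535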
docs:
np.iinfo (machine limits for integer types)
np.finfo (machine limits for floating point types)

You're looking for numpy.iinfo for integer types. Documentation here.
There's also numpy.finfo for floating point types. Documentation here.
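To directly answer the original question, a minimal sketch of a checker built on np.iinfo and np.finfo could look like the following; the name fits_in_dtype is hypothetical, not a numpy function:

import numpy as np

def fits_in_dtype(value, dtype):
    # Hypothetical helper (a sketch, not a numpy API): returns True if
    # `value` lies within the representable range of `dtype`.
    dtype = np.dtype(dtype)
    if np.issubdtype(dtype, np.integer):
        info = np.iinfo(dtype)
    elif np.issubdtype(dtype, np.floating):
        info = np.finfo(dtype)
    else:
        raise TypeError("Unsupported dtype: %s" % dtype)
    return info.min <= value <= info.max

For the uint16 case above:

>>> fits_in_dtype(65535, np.uint16)
True
>>> fits_in_dtype(70000, np.uint16)
False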