I am completely puzzled by this.
From the following code:
import numpy as np
a = np.array([4, -9])
a[0] = 0.4
a
I expected the output array([ 0.4, -9]), but it gives me array([ 0, -9]).
But when I changed the dtype to 'f':
a = np.array([4, -9], 'f')
a[0] = 0.4
a
It gives me the expected output of array([ 0.40000001, -9. ], dtype=float32).
The documentation for numpy.array(object, dtype=None, copy=True, order='K', subok=False, ndmin=0)
says:
dtype : data-type, optional The desired data-type for the array. If not given, then the type will be determined as the minimum type required to hold the objects in the sequence. This argument can only be used to ‘upcast’ the array. For downcasting, use the .astype(t) method.
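As an aside, here is a minimal sketch of the upcast/downcast distinction the documentation is drawing; the dtype names and printed values assume a typical 64-bit platform and may differ slightly elsewhere:
import numpy as np

# Upcasting at creation time: asking for a wider dtype than the inferred int64 is honoured.
b = np.array([4, -9], dtype=np.float64)
print(b)            # [ 4. -9.]

# Downcasting an existing array: .astype() returns a new array with the requested dtype.
c = b.astype(np.int64)
print(c)            # [ 4 -9]
print(b)            # [ 4. -9.]  (the original array is unchanged)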
When I initialized the array, the values were stored as integers, so when I assigned a float to an element it only kept the integer part of 0.4 and hence gave me 0. This is how I understand it. Is this correct? But I am still surprised by this behavior.
Question: What exactly is going on here?
The problem is that your array is of dtype=np.int64:
In [141]: a = np.array([4, -9])
In [142]: a.dtype
Out[142]: dtype('int64')
This means that you can only store integers, and any floats are truncated before assignment is done. If you want to store floats and ints together, you should specify dtype=object first:
In [143]: a = np.array([4, -9], dtype=object)
In [144]: a[0] = 0.4
In [145]: a
Out[145]: array([0.4, -9], dtype=object)
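If you do not actually need mixed Python objects, a plain float array is usually the better choice. A sketch of two equivalent ways, assuming float64 is acceptable for your data:
import numpy as np

a = np.array([4, -9], dtype=float)    # create as float64 from the start
a[0] = 0.4
print(a)                              # [ 0.4 -9. ]

b = np.array([4, -9]).astype(float)   # or convert an existing int array
b[0] = 0.4
print(b)                              # [ 0.4 -9. ]
Unlike dtype=object, a float64 array stores native doubles, so NumPy arithmetic on it stays vectorised instead of falling back to per-element Python objects.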
As for the array([ 0.40000001, -9. ], dtype=float32) result: 0.4, as a floating-point number, does not have an exact representation in memory (only an approximate one), which accounts for the imprecision you see.