I get this exception for a reason I do not understand. It is quite complicated where my np.array v comes from, but here is the code at the point where the exception occurs:
print v, type(v)
for val in v:
    print val, type(val)
print "use isfinite() with astype(float64): "
print np.isfinite(v.astype("float64"))
print "use isfinite() as usual: "
try:
    print np.isfinite(v)
except Exception, e:
    print e
This gives the following output:
[6.4441947744288255 7.2246449651781788 4.1028442021807656
4.8832943929301189] <type 'numpy.ndarray'>
6.44419477443 <type 'numpy.float64'>
7.22464496518 <type 'numpy.float64'>
4.10284420218 <type 'numpy.float64'>
4.88329439293 <type 'numpy.float64'>
use isfinite() with astype(float64):
[ True True True True]
use isfinite() as usual:
ufunc 'isfinite' not supported for the input types, and the inputs could not be safely coerced to any supported types according to the casting rule ''safe''
I do not understand the TypeError. All the elements are np.float64 and should be fine. Maybe a bug? This error only occurs sometimes, but I can't find any differences between the arrays. They always have the same type.
Thanks in advance.
EDIT: Working example:
The data structures are as small as shown above.
import pandas as pd
import numpy as np
def forward_estim(H, end):
    old_idx = H.index
    new_idx = pd.period_range(old_idx[-1], end, freq=old_idx.freq)
    H_estim = pd.DataFrame(columns=["A", "B", "C", "D"], index=new_idx)
    H_chg = H.values[1:] - H.values[:-1]
    mean_ = H_chg.mean()
    std_ = H_chg.std()
    H_estim.ix[0] = H.ix[-1]
    for i in range(1, len(H_estim)):
        H_estim.A[i] = H_estim.A[i-1] + mean_ + std_/2
        H_estim.B[i] = H_estim.B[i-1] + mean_ + std_
        H_estim.C[i] = H_estim.C[i-1] + mean_ - std_
        H_estim.D[i] = H_estim.D[i-1] + mean_ - std_/2
    return H_estim.ix[1:]
H_idx = pd.period_range("2010-01-01","2012-01-01",freq="A")
print H_idx
H = pd.Series(np.array([2.3,3.0,2.9]),index=H_idx)
print H
H_estim = forward_estim(H,"2014-01-01")
print H_estim
np.isfinite(H_estim.values.astype("float64"))
print "This works!"
np.isfinite(H_estim.values)
print "This does not work!"
This is run using:
Mac OS X Mavericks, Python 2.7.6, numpy 1.8.1, pandas 0.13.1
H_estim.values is a numpy array with data type object (take a look at H_estim.values.dtype):
In [62]: H_estim.values
Out[62]:
array([[3.4000000000000004, 3.6000000000000005, 2.7999999999999998, 3.0],
[3.9000000000000004, 4.3000000000000007, 2.6999999999999993,
3.0999999999999996]], dtype=object)
In [63]: H_estim.values.dtype
Out[63]: dtype('O')
In an object array, the data stored in the array's memory are pointers to Python objects, not the objects themselves. In this case, the objects are np.float64 instances:
In [65]: H_estim.values[0,0]
Out[65]: 3.4000000000000004
In [66]: type(H_estim.values[0,0])
Out[66]: numpy.float64
So in many respects, this array looks and acts like an array of np.float64 values, but it is not the same. In particular, the numpy ufuncs (including np.isfinite) don't handle object arrays.
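A quick way to see that it is the array's dtype (not the element type) that matters: a hand-built object array containing only np.float64 values fails in exactly the same way. This is just an illustrative sketch, not code from the question:

import numpy as np

# An object array whose elements are all np.float64 instances
obj_arr = np.array([np.float64(1.5), np.float64(2.5)], dtype=object)

print obj_arr.dtype                  # object ('O')
print type(obj_arr[0])               # <type 'numpy.float64'>

try:
    np.isfinite(obj_arr)             # same TypeError as in the question
except TypeError, e:
    print e

print np.isfinite(obj_arr.astype(np.float64))   # works: [ True  True]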
H_estim.values.astype(np.float64) converts the array to one with data type np.float64 (i.e. an array where the elements are the actual floating point values, not pointers to objects). Compare the following to the output shown above for H_estim.values.
In [70]: a = H_estim.values.astype(np.float64)
In [71]: a
Out[71]:
array([[ 3.4, 3.6, 2.8, 3. ],
[ 3.9, 4.3, 2.7, 3.1]])
In [72]: a.dtype
Out[72]: dtype('float64')
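If you want to avoid calling astype at every use site, one option is to cast the frame itself once after building it. This is only a sketch against the example above (DataFrame.astype should be available in pandas 0.13, but I haven't verified the exact behaviour on that version):

H_estim_num = H_estim.astype(np.float64)   # DataFrame with float64 columns
print H_estim_num.values.dtype             # float64
print np.isfinite(H_estim_num.values)      # no TypeError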
You assume that "All the elements are np.float64 and should be fine." However, that is likely not the case. How large is the data structure? Can you look at all the values and find something suspicious? From http://matplotlib.1069221.n5.nabble.com/type-error-with-python-3-2-and-version-1-1-1-of-matplotlib-numpy-error-td38784.html we see that this problem can appear with Decimal data types. Is there a way for you to create a minimal working example that reproduces the issue? It should be possible, and creating such an example will most likely already pinpoint the problem.
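For reference, a single Decimal (or any other non-float Python object) in the data is enough to silently turn the whole array into an object array, which then breaks np.isfinite. A small illustrative sketch, not taken from your data:

import numpy as np
from decimal import Decimal

a = np.array([1.5, 2.5, Decimal("3.5")])   # one Decimal forces dtype=object
print a.dtype                               # object

try:
    np.isfinite(a)
except TypeError, e:
    print e                                 # ufunc 'isfinite' not supported ...

print np.isfinite(a.astype(np.float64))     # casting first works fine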