I'd like to use .ftr files to quickly analyze hundreds of tables. Unfortunately I have some problems with the decimal and thousands separators, similar to that post, except that read_feather does not allow for decimal=',', thousands='.' options. I've tried the following approaches:
df['numberofx'] = (
    df['numberofx']
    .apply(lambda x: x.str.replace(".", "", regex=True)
                      .str.replace(",", ".", regex=True))
)
resulting in
AttributeError: 'str' object has no attribute 'str'
when I change it to
df['numberofx'] = (
    df['numberofx']
    .apply(lambda x: x.replace(".", "").replace(",", "."))
)
I receive some strange (rounding) mistakes in the results, like 22359999999999998 instead of 2236 for some numbers higher than 1k. All numbers below 1k end up 10 times the real result, which is probably because the "." of the float is deleted and the remaining digits are turned into an int.
Trying
df['numberofx'] = df['numberofx'].str.replace('.', '', regex=True)
also leads to some strange behavior in the results, as some numbers end up around 10^12 while others remain at 10^3 as they should.
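As an aside, with regex=True the pattern '.' is a regex wildcard that matches any character, so a literal dot needs regex=False or escaping ('\.'). A minimal illustration with made-up values:

import pandas as pd

s = pd.Series(["2.236", "1.234.567"])
print(s.str.replace(".", "", regex=True))   # '.' matches every character -> empty strings
print(s.str.replace(".", "", regex=False))  # literal dot removed -> '2236', '1234567'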
Here is how I create my .ftr files from multiple Excel files. I know I could simply create DataFrames from the Excel files each time, but that would slow down my daily calculations too much.
How can I solve that issue?
EDIT: The issue seems to come from reading in an Excel file as a df with non-US decimal and thousands separators and then saving it as feather. Using the
pd.read_excel(f, encoding='utf-8', decimal=',', thousands='.')
options for reading in the Excel file solved my issue. That leads to the next question:
why does saving floats in a feather file lead to strange rounding errors, like changing 2.236 to 2.2359999999999998?
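For reference, a minimal sketch of the read-and-convert flow described in the edit; the paths and glob pattern here are hypothetical, the decimal keyword for read_excel needs a fairly recent pandas version, and to_feather requires pyarrow:

import glob
import pandas as pd

for f in glob.glob("tables/*.xlsx"):
    # parse the European number format at read time so the values arrive as proper floats
    df = pd.read_excel(f, decimal=',', thousands='.')
    df.to_feather(f.replace('.xlsx', '.ftr'))

df = pd.read_feather('tables/example.ftr')  # the fast daily read

As for the rounding in the edit: it is not feather-specific. 2.236 has no exact binary float64 representation, so feather simply stores the nearest double, and 2.2359999999999998 is that same stored value printed with full precision (compare print(format(2.236, '.17g'))).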
The problem in your code is that when you check the column's type in the DataFrame (pandas), you will find:
df.dtypes['numberofx']
result: object
So the suggested solution is to try:
df['numberofx'] = df['numberofx'].apply(pd.to_numeric, errors='coerce')
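Note that pd.to_numeric on its own will either mis-parse or coerce to NaN strings that still contain the European separators, so for the data described in the question the separators would have to be stripped first. A sketch with made-up values, using regex=False so the dot is treated literally:

import pandas as pd

df = pd.DataFrame({'numberofx': ['2.236', '1.234.567,89', '912,5']})
cleaned = (
    df['numberofx']
    .str.replace('.', '', regex=False)   # drop thousands separators
    .str.replace(',', '.', regex=False)  # decimal comma -> decimal point
)
df['numberofx'] = pd.to_numeric(cleaned, errors='coerce')
print(df['numberofx'].tolist())  # [2236.0, 1234567.89, 912.5]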
Another way to fix this problem is to convert your values to float:
def coerce_to_float(val):
    try:
        return float(val)
    except ValueError:
        return val

# applymap only exists on DataFrames; for a single column use map (or apply)
df['numberofx'] = df['numberofx'].map(coerce_to_float)
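Continuing with the coerce_to_float helper above: non-numeric strings are left untouched rather than coerced to NaN, so a mixed column keeps its object dtype (values here are made up):

df = pd.DataFrame({'numberofx': ['2236.5', 'CN414149']})
df['numberofx'] = df['numberofx'].map(coerce_to_float)
print(df['numberofx'].tolist())  # [2236.5, 'CN414149']
print(df['numberofx'].dtype)     # object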
To avoid that kind of float display ('4.806105e+12'), here is a sample:
import numpy as np
import pandas as pd

df = pd.DataFrame({'numberofx': ['4806105017087', '4806105017087', 'CN414149']})
print (df)
       numberofx
0  4806105017087
1  4806105017087
2       CN414149

print (pd.to_numeric(df['numberofx'], errors='coerce'))
0    4.806105e+12
1    4.806105e+12
2             NaN
Name: numberofx, dtype: float64

df['numberofx'] = pd.to_numeric(df['numberofx'], errors='coerce').fillna(0).astype(np.int64)
print (df['numberofx'])
0    4806105017087
1    4806105017087
2                0
Name: numberofx, dtype: int64
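If filling the missing value with 0 is not desirable, a variant (assuming pandas 1.0 or newer) is the nullable Int64 dtype, which keeps the missing entry as <NA> instead. Starting again from the original string column:

df = pd.DataFrame({'numberofx': ['4806105017087', '4806105017087', 'CN414149']})
df['numberofx'] = pd.to_numeric(df['numberofx'], errors='coerce').astype('Int64')
print (df['numberofx'])
0    4806105017087
1    4806105017087
2             <NA>
Name: numberofx, dtype: Int64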