When saving a Pandas DataFrame to CSV, some integers get converted to floats.
It happens when a column of integers has missing values (np.nan), because pandas stores such a column as float64.
Is there a simple way to avoid it? (Especially in an automatic way - I often deal with many columns of various data types.)
For example
import pandas as pd
import numpy as np
df = pd.DataFrame([[1, 2], [3, np.nan], [5, 6]],
                  columns=["a", "b"],
                  index=["i_1", "i_2", "i_3"])
df.to_csv("file.csv")
yields
,a,b
i_1,1,2.0
i_2,3,
i_3,5,6.0
What I would like to get is
,a,b
i_1,1,2
i_2,3,
i_3,5,6
EDIT: I am fully aware of Support for integer NA - Pandas Caveats and Gotchas. The question is what a nice workaround would be (especially when there are many other columns of various types and I do not know in advance which "integer" columns have missing values).
Using float_format='%.12g' inside the to_csv function solved a similar problem for me. It keeps the decimals for legitimate floats with up to 12 significant digits, but drops them for integers forced to floats by the presence of NaNs:
In [4]: df
Out[4]:
a b
i_1 1 2.0
i_2 3 NaN
i_3 5.9 6.0
In [5]: df.to_csv('file.csv', float_format = '%.12g')
Output is:
, a, b
i_1, 1, 2
i_2, 3,
i_3, 5.9, 6
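If you want this applied to every export automatically, a small wrapper can set the format by default. This is just a convenience sketch on top of the answer above; the helper name to_csv_12g is made up:
import pandas as pd

def to_csv_12g(df, *args, **kwargs):
    # Hypothetical wrapper: default to '%.12g' unless the caller overrides it
    kwargs.setdefault('float_format', '%.12g')
    return df.to_csv(*args, **kwargs)

# to_csv_12g(df, 'file.csv')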
This snippet does what you want and should be relatively efficient at doing it.
import numpy as np
import pandas as pd

EPSILON = 1e-9

def _lost_precision(s):
    """
    The total amount of precision lost over Series `s`
    during conversion to int64 dtype.
    """
    try:
        return (s - s.fillna(0).astype(np.int64)).sum()
    except ValueError:
        # Non-numeric column: signal "cannot convert" so it is left untouched
        return np.nan

def _nansafe_integer_convert(s):
    """
    Convert Series `s` to an object dtype with `np.nan`
    represented as an empty string "".
    """
    if _lost_precision(s) < EPSILON:
        # Here's where the magic happens
        as_object = s.fillna(0).astype(np.int64).astype(object)
        as_object[s.isnull()] = ""
        return as_object
    else:
        return s

def nansafe_to_csv(df, *args, **kwargs):
    """
    Write `df` to a csv file, allowing for missing values
    in integer columns.

    Uses `_lost_precision` to test whether a column can be
    converted to an integer data type without losing precision.
    Missing values in integer columns are represented as empty
    fields in the resulting csv.
    """
    df.apply(_nansafe_integer_convert).to_csv(*args, **kwargs)
We can test this with a simple DataFrame which should cover all bases:
In [75]: df = pd.DataFrame([[1, 2, 3.1, "i"], [3, np.nan, 4.0, "j"], [5, 6, 7.1, "k"]],
                           columns=["a", "b", "c", "d"],
                           index=["i_1", "i_2", "i_3"])
In [76]: df
Out[76]:
a b c d
i_1 1 2 3.1 i
i_2 3 NaN 4.0 j
i_3 5 6 7.1 k
In [77]: nansafe_to_csv(df, 'deleteme.csv', index=False)
Which produces the following csv file:
a,b,c,d
1,2,3.1,i
3,,4.0,j
5,6,7.1,k
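Since nansafe_to_csv simply forwards *args and **kwargs to DataFrame.to_csv, the usual export options still apply. For example, you can still combine it with the float_format trick from the earlier answer for the remaining genuine float columns (a usage sketch):
nansafe_to_csv(df, 'deleteme.csv', index=False, float_format='%.12g')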
I'm expanding the sample data here to make sure this handles the situations you are dealing with:
df = pd.DataFrame([[1.1, 2, 9.9, 44, 1.0],
                   [3.3, np.nan, 4.4, 22, 3.0],
                   [5.5, 8, np.nan, 66, 4.0]],
                  columns=list('abcde'),
                  index=["i_1", "i_2", "i_3"])
a b c d e
i_1 1.1 2 9.9 44 1
i_2 3.3 NaN 4.4 22 3
i_3 5.5 8 NaN 66 4
df.dtypes
a float64
b float64
c float64
d int64
e float64
I think if you want a general solution, it's going to have to be explicitly coded, because pandas doesn't allow NaNs in int columns. What I do below is check for integer values (we can't really check the dtype, since columns containing NaNs will already have been recast to float), and if a column holds only integer values, convert it to a string format and replace 'NAN' with '' (empty). Of course, this is not how you want to store the integers, except as a final step before writing the output.
for col in df.columns:
    if df[col].isnull().any():
        tmp = df[col][df[col].notnull()]
        # Treat the column as integer if every non-null value is a whole number
        if (tmp.astype(int).astype(float) == tmp.astype(float)).all():
            # The 'F' format spec renders NaN as 'NAN', which we then blank out
            df[col] = df[col].map('{:.0F}'.format).replace('NAN', '')

df.to_csv('x.csv')
Here's the output file, and also what it looks like if you read it back into pandas (although the purpose of this is presumably to read it into other numerical packages):
%more x.csv
,a,b,c,d,e
i_1,1.1,2,9.9,44,1.0
i_2,3.3,,4.4,22,3.0
i_3,5.5,8,,66,4.0
pd.read_csv('x.csv')
Unnamed: 0 a b c d e
0 i_1 1.1 2 9.9 44 1
1 i_2 3.3 NaN 4.4 22 3
2 i_3 5.5 8 NaN 66 4
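If you'd rather not overwrite the numeric columns of the original frame with strings (see the caveat above about this being a final output step), the same idea can be wrapped in a helper that works on a copy. This is only a sketch building on the loop above; the name int_safe_to_csv is made up, and the try/except is an assumption added to skip non-numeric columns:
def int_safe_to_csv(df, path, **kwargs):
    out = df.copy()  # keep the caller's frame and its numeric dtypes intact
    for col in out.columns:
        if out[col].isnull().any():
            tmp = out[col][out[col].notnull()]
            try:
                # Integer-valued if every non-null entry is a whole number
                is_integer = (tmp.astype(int).astype(float) == tmp.astype(float)).all()
            except (ValueError, TypeError):
                is_integer = False  # non-numeric column, leave it alone
            if is_integer:
                out[col] = out[col].map('{:.0F}'.format).replace('NAN', '')
    out.to_csv(path, **kwargs)

int_safe_to_csv(df, 'x.csv')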
@EdChum's suggestion in the comments is nice; you could also use the float_format argument (see the docs):
In [28]: a
Out[28]:
a b
0 0 1
1 1 NaN
2 2 3
In [31]: a.to_csv(r'c:\x.csv', float_format = '%.0f')
Gives out:
,a,b
0,0,1
1,1,
2,2,3
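One caveat: '%.0f' is applied to every float column, so genuine non-integer floats get rounded as well. If the frame mixes real floats with NaN-forced integer columns, the '%.12g' format from the earlier answer is the safer choice. A quick sketch of the difference, using a made-up example frame:
df = pd.DataFrame({'a': [0, 1, 2], 'b': [1.0, np.nan, 3.7]})
df.to_csv('x.csv', float_format='%.0f')   # b is written as 1,,4  (3.7 rounded)
df.to_csv('x.csv', float_format='%.12g')  # b is written as 1,,3.7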