I have a #-separated file with three columns: the first is an integer, the second looks like a float (but isn't), and the third is a string. I attempt to load it directly into Python with pandas.read_csv:
In [149]: d = pandas.read_csv('resources/names/fos_names.csv', sep='#', header=None, names=['int_field', 'floatlike_field', 'str_field'])
In [150]: d
Out[150]:
<class 'pandas.core.frame.DataFrame'>
Int64Index: 1673 entries, 0 to 1672
Data columns:
int_field          1673  non-null values
floatlike_field    1673  non-null values
str_field          1673  non-null values
dtypes: float64(1), int64(1), object(1)
pandas tries to be smart and automatically converts fields to a useful type. The issue is that I don't actually want it to do so (if I did, I'd have used the converters argument). How can I prevent pandas from converting types automatically?
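For reference, the converters workaround I'd rather avoid would look something like this, routing every column through str:

import pandas

# Converters workaround (the thing I'd rather not do): pass every
# column through str so read_csv never infers a numeric type.
d = pandas.read_csv('resources/names/fos_names.csv', sep='#', header=None,
                    names=['int_field', 'floatlike_field', 'str_field'],
                    converters={'int_field': str,
                                'floatlike_field': str,
                                'str_field': str})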
I'm planning to add explicit column dtypes in the upcoming file parser engine overhaul in pandas 0.10. I can't commit 100% to it, but it should be pretty simple with the new infrastructure coming together (http://wesmckinney.com/blog/?p=543).
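Once that lands, usage should look something like the following sketch against the planned dtype argument (hypothetical until 0.10 ships):

import pandas

# Planned dtype argument (pandas >= 0.10, sketch): map the
# float-looking column to str so no automatic conversion happens.
d = pandas.read_csv('resources/names/fos_names.csv', sep='#', header=None,
                    names=['int_field', 'floatlike_field', 'str_field'],
                    dtype={'int_field': int,
                           'floatlike_field': str,
                           'str_field': str})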
I think your best bet is to read the data in as a record array first using numpy.
# what you described:
In [15]: import numpy as np
In [16]: import pandas
In [17]: x = pandas.read_csv('weird.csv')
In [19]: x.dtypes
Out[19]:
int_field            int64
floatlike_field    float64   # what you don't want?
str_field           object
In [20]: datatypes = [('int_field','i4'),('floatlike','S10'),('strfield','S10')]
In [21]: y_np = np.loadtxt('weird.csv', dtype=datatypes, delimiter=',', skiprows=1)
In [22]: y_np
Out[22]:
array([(1, '2.31', 'one'), (2, '3.12', 'two'), (3, '1.32', 'three ')],
dtype=[('int_field', '<i4'), ('floatlike', '|S10'), ('strfield', '|S10')])
In [23]: y_pandas = pandas.DataFrame.from_records(y_np)
In [25]: y_pandas.dtypes
Out[25]:
int_field     int64
floatlike    object   # better?
strfield     object
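The same trick adapted to your '#'-separated, header-less file would look roughly like this (assuming 10 characters is wide enough for the string fields):

import numpy as np
import pandas

# Record-array route against the file from the question: '#' delimiter,
# no header row to skip, S-typed fields keep the float-looking column
# as raw strings.
datatypes = [('int_field', 'i4'),
             ('floatlike_field', 'S10'),
             ('str_field', 'S10')]
arr = np.loadtxt('resources/names/fos_names.csv', dtype=datatypes, delimiter='#')
d = pandas.DataFrame.from_records(arr)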