
Pandas reading csv as string type

I have a data frame with alpha-numeric keys which I want to save as a csv and read back later. For various reasons I need to explicitly read this key column as a string format: I have keys which are strictly numeric, or even worse, things like 1234E5, which Pandas interprets as a float. This obviously makes the key completely useless.
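To make the failure mode concrete, here is a minimal sketch of the key mangling described above (using an in-memory buffer instead of a real file): with default type inference, a key written in scientific-notation form gets parsed as a float.

```python
import io
import pandas as pd

# A key like "1234E5" looks like scientific notation to the parser.
csv_text = "key,value\n1234E5,10\n"

df = pd.read_csv(io.StringIO(csv_text))
print(df["key"].iloc[0])  # 123400000.0 -- the key was parsed as a float
```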

The problem is when I specify a string dtype for the data frame or any column of it I just get garbage back. I have some example code here:

df = pd.DataFrame(np.random.rand(2, 2),
                  index=['1A', '1B'],
                  columns=['A', 'B'])
df.to_csv(savefile)

The data frame looks like:

           A         B
1A  0.209059  0.275554
1B  0.742666  0.721165

Then I read it like so:

df_read = pd.read_csv(savefile, dtype=str, index_col=0) 

and the result is:

   A  B
B  (  <

Is this a problem with my computer, or something I'm doing wrong here, or just a bug?

daver asked Jun 07 '13


2 Answers

Update: this has been fixed: from 0.11.1, passing str/np.str is equivalent to using object.
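Given that fix, in current pandas versions the original round-trip from the question works with dtype=str directly; a sketch using an in-memory buffer in place of the question's savefile:

```python
import io
import numpy as np
import pandas as pd

df = pd.DataFrame(np.random.rand(2, 2),
                  index=['1A', '1B'],
                  columns=['A', 'B'])

buf = io.StringIO()
df.to_csv(buf)
buf.seek(0)

# dtype=str no longer garbles the data; everything comes back as strings.
df_read = pd.read_csv(buf, dtype=str, index_col=0)
print(df_read.index.tolist())  # ['1A', '1B'] -- keys survive as strings
```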

Use the object dtype:

In [11]: pd.read_csv('a', dtype=object, index_col=0)
Out[11]:
                      A                     B
1A  0.35633069074776547     0.745585398803751
1B  0.20037376323337375  0.013921830784260236

or better yet, just don't specify a dtype:

In [12]: pd.read_csv('a', index_col=0)
Out[12]:
           A         B
1A  0.356331  0.745585
1B  0.200374  0.013922

but bypassing the type sniffer and truly returning only strings requires a hacky use of converters:

In [13]: pd.read_csv('a', converters={i: str for i in range(100)})
Out[13]:
                      A                     B
1A  0.35633069074776547     0.745585398803751
1B  0.20037376323337375  0.013921830784260236

where 100 is some number equal to or greater than your total number of columns.

It's best to avoid the str dtype, see for example here.

Andy Hayden answered Nov 02 '22


Like Anton T said in his comment, pandas will randomly turn object types into float types using its type sniffer, even if you pass dtype=object, dtype=str, or dtype=np.str.

Since you can pass a dictionary of functions where the key is a column index and the value is a converter function, you can do something like this (e.g. for 100 columns).

pd.read_csv('some_file.csv', converters={i: str for i in range(0, 100)}) 

You can even pass range(0, N) for N much larger than the number of columns if you don't know how many columns you will read.
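A quick sketch of that over-provisioning trick: the converters dictionary can safely key more column indices than the file actually has, so a generous range works without knowing the column count up front.

```python
import io
import pandas as pd

csv_text = "A,B\n1,2\n"

# Only 2 columns exist, but keying 100 converter slots is harmless.
df = pd.read_csv(io.StringIO(csv_text),
                 converters={i: str for i in range(100)})
print(df["A"].iloc[0])  # '1' -- kept as a string, not inferred as int
```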

Chris Conlan answered Nov 02 '22