
Pandas memory error

I have a csv file with ~50,000 rows and 300 columns. Performing the following operation causes a memory error in pandas (Python):

merged_df.stack(0).reset_index(1)

The data frame looks like:

GRID_WISE_MW1   Col0    Col1    Col2 .... Col300
7228260         1444    1819    2042
7228261         1444    1819    2042
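A tiny reproduction of the reshape (values taken from the sample above; the real frame has ~50,000 rows and 300 columns):

```python
import pandas as pd

# Small stand-in for the question's frame, using the sample values above.
df = pd.DataFrame(
    {"Col0": [1444, 1444], "Col1": [1819, 1819], "Col2": [2042, 2042]},
    index=pd.Index([7228260, 7228261], name="GRID_WISE_MW1"),
)

# stack(0) pivots the columns into an extra index level; reset_index(1)
# then turns that level back into an ordinary column.
long_df = df.stack(0).reset_index(1)
print(long_df.shape)
```

Each of the 2 rows becomes 3 rows (one per column), so the output has 6 rows and 2 columns.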

I am using the latest pandas (0.13.1), and the error does not occur with dataframes that have fewer rows (~2,000).

thanks!

asked Dec 20 '25 by user308827

1 Answer

On my 64-bit Linux machine (32 GB of memory), this takes a little less than 2 GB:

In [4]: import numpy as np

In [5]: from pandas import DataFrame

In [6]: def f():
   ...:     df = DataFrame(np.random.randn(50000, 300))
   ...:     df.stack().reset_index(1)


In [6]: %memit f()
maximum of 1: 1791.054688 MB per loop
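A quick back-of-envelope check (my estimate, not part of the original answer): the raw float64 values alone are small, so the ~1.8 GB peak must be dominated by the intermediate copies and the new index objects that `stack`/`reset_index` build.

```python
# Rough size of the raw float64 values alone, for a 50,000 x 300 frame.
n_values = 50_000 * 300
raw_mib = n_values * 8 / 2**20  # 8 bytes per float64
print(raw_mib)                  # roughly 114 MiB
```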

Since you didn't specify your platform: this won't work on 32-bit at all (you usually can't allocate a 2 GB contiguous block there), but it should work on 64-bit if you have reasonable swap/memory.
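If memory is tight, one option (my suggestion, not from the answer above) is to downcast to `float32` before reshaping, assuming the lost precision is acceptable for your data; that roughly halves the footprint of the values:

```python
import numpy as np
import pandas as pd

# Smaller stand-in frame; the same ratio holds at 50,000 x 300.
df = pd.DataFrame(np.random.randn(5_000, 300))

before = df.memory_usage(deep=True).sum()
df32 = df.astype(np.float32)         # halve the per-value storage
after = df32.memory_usage(deep=True).sum()

stacked = df32.stack().reset_index(1)
print(before / after)                # close to 2
```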

answered Dec 23 '25 by Jeff

