I have a rather large CSV file: it contains 9917530 rows (without the header) and 54 columns. The columns are real or integer; only one contains dates. There are a few NULL values in the file, which are translated to nan after I load it into a pandas DataFrame, which I do like this:
import pandas as pd
data = pd.read_csv('data.csv')
After loading, which I think was very fast since it took around 30 seconds (pretty much the same time as counting lines with the Unix tool wc), the process was taking around 4 GB of RAM (the size of the file on disk is 2.2 GB). So far so good.
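For reference, a lightly tuned variant of the load is sketched below; parsing the date column up front is only an optional tweak, not a known fix for the slowdown described next (the date_time column name is taken from the df.info() output in the edit below).
import pandas as pd
# Parse the date column while reading so it is stored as datetime64
# rather than as generic Python objects (column name from df.info() below).
data = pd.read_csv('data.csv', parse_dates=['date_time'])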
Then I tried to do the following:
column_means = data.mean()
The process's memory usage grew to ~22 GB very quickly. I could also see that one processor core was very busy, for about three hours, after which I killed the process because I needed the machine for other things. I have a pretty fast Linux PC with 2 processors of 4 cores each (8 cores in total) and 32 GB of RAM. I cannot believe calculating column means should take so long.
Can anybody explain why DataFrame.mean() is so slow? And more importantly, what is a better way of calculating the column means of a file like that? Did I not load the file in the best way possible? Should I use a different function instead of DataFrame.mean(), or perhaps a completely different tool?
Many thanks in advance.
EDIT: Here is what df.info() shows:
<class 'pandas.core.frame.DataFrame'>
Int64Index: 9917530 entries, 0 to 9917529
Data columns (total 54 columns):
srch_id 9917530 non-null values
date_time 9917530 non-null values
site_id 9917530 non-null values
visitor_location_country_id 9917530 non-null values
visitor_hist_starrating 505297 non-null values
visitor_hist_adr_usd 507612 non-null values
prop_country_id 9917530 non-null values
prop_id 9917530 non-null values
prop_starrating 9917530 non-null values
prop_review_score 9902900 non-null values
prop_brand_bool 9917530 non-null values
prop_location_score1 9917530 non-null values
prop_location_score2 7739150 non-null values
prop_log_historical_price 9917530 non-null values
position 9917530 non-null values
price_usd 9917530 non-null values
promotion_flag 9917530 non-null values
srch_destination_id 9917530 non-null values
srch_length_of_stay 9917530 non-null values
srch_booking_window 9917530 non-null values
srch_adults_count 9917530 non-null values
srch_children_count 9917530 non-null values
srch_room_count 9917530 non-null values
srch_saturday_night_bool 9917530 non-null values
srch_query_affinity_score 635564 non-null values
orig_destination_distance 6701069 non-null values
random_bool 9917530 non-null values
comp1_rate 235806 non-null values
comp1_inv 254433 non-null values
comp1_rate_percent_diff 184907 non-null values
comp2_rate 4040633 non-null values
comp2_inv 4251538 non-null values
comp2_rate_percent_diff 1109847 non-null values
comp3_rate 3059273 non-null values
comp3_inv 3292221 non-null values
comp3_rate_percent_diff 944007 non-null values
comp4_rate 620099 non-null values
comp4_inv 692471 non-null values
comp4_rate_percent_diff 264213 non-null values
comp5_rate 4444294 non-null values
comp5_inv 4720833 non-null values
comp5_rate_percent_diff 1681006 non-null values
comp6_rate 482487 non-null values
comp6_inv 524145 non-null values
comp6_rate_percent_diff 193312 non-null values
comp7_rate 631077 non-null values
comp7_inv 713175 non-null values
comp7_rate_percent_diff 277838 non-null values
comp8_rate 3819043 non-null values
comp8_inv 3960388 non-null values
comp8_rate_percent_diff 1225707 non-null values
click_bool 9917530 non-null values
gross_bookings_usd 276592 non-null values
booking_bool 9917530 non-null values
dtypes: float64(34), int64(19), object(1)
Here's a similarly sized frame, but without an object column:
In [9]: import numpy as np; import pandas as pd; from pandas import DataFrame; from numpy.random import randn, randint

In [10]: nrows = 10000000

In [11]: df = pd.concat([DataFrame(randn(nrows, 34), columns=['f%s' % i for i in range(34)]),
   ....:                 DataFrame(randint(0, 10, size=nrows * 19).reshape(nrows, 19),
   ....:                           columns=['i%s' % i for i in range(19)])],
   ....:                axis=1)
In [12]: df.iloc[1000:10000,0:20] = np.nan
In [13]: df.info()
<class 'pandas.core.frame.DataFrame'>
Int64Index: 10000000 entries, 0 to 9999999
Data columns (total 53 columns):
f0 9991000 non-null values
f1 9991000 non-null values
f2 9991000 non-null values
f3 9991000 non-null values
f4 9991000 non-null values
f5 9991000 non-null values
f6 9991000 non-null values
f7 9991000 non-null values
f8 9991000 non-null values
f9 9991000 non-null values
f10 9991000 non-null values
f11 9991000 non-null values
f12 9991000 non-null values
f13 9991000 non-null values
f14 9991000 non-null values
f15 9991000 non-null values
f16 9991000 non-null values
f17 9991000 non-null values
f18 9991000 non-null values
f19 9991000 non-null values
f20 10000000 non-null values
f21 10000000 non-null values
f22 10000000 non-null values
f23 10000000 non-null values
f24 10000000 non-null values
f25 10000000 non-null values
f26 10000000 non-null values
f27 10000000 non-null values
f28 10000000 non-null values
f29 10000000 non-null values
f30 10000000 non-null values
f31 10000000 non-null values
f32 10000000 non-null values
f33 10000000 non-null values
i0 10000000 non-null values
i1 10000000 non-null values
i2 10000000 non-null values
i3 10000000 non-null values
i4 10000000 non-null values
i5 10000000 non-null values
i6 10000000 non-null values
i7 10000000 non-null values
i8 10000000 non-null values
i9 10000000 non-null values
i10 10000000 non-null values
i11 10000000 non-null values
i12 10000000 non-null values
i13 10000000 non-null values
i14 10000000 non-null values
i15 10000000 non-null values
i16 10000000 non-null values
i17 10000000 non-null values
i18 10000000 non-null values
dtypes: float64(34), int64(19)
Timings (on a machine with similar specs to yours):
In [14]: %timeit df.mean()
1 loops, best of 3: 21.5 s per loop
You can get a 2x speedup by pre-converting to floats (mean does this too, but in a more general way, hence slower):
In [15]: %timeit df.astype('float64').mean()
1 loops, best of 3: 9.45 s per loop
Your problem is the object column. mean will try to calculate over all of the columns, but because of the object column everything is upcast to object dtype, which is not efficient for computation.
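As a minimal sketch of that upcast (toy data with made-up column names): pulling a mixed-dtype frame into a single array forces everything to object dtype, while the purely numeric columns remain a fast float/int block.
import numpy as np
import pandas as pd

toy = pd.DataFrame({'a': np.random.randn(5),           # float64
                    'b': np.random.randint(0, 10, 5),  # int64
                    'c': list('abcde')})                # object

print(toy.values.dtype)              # object  -> everything upcast
print(toy[['a', 'b']].values.dtype)  # float64 -> numeric block only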
Your best bet is to do:
df._get_numeric_data().mean()
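A public-API equivalent, assuming a pandas version that has select_dtypes, avoids the private method:
import numpy as np
# keep only the numeric columns, then take the mean
column_means = df.select_dtypes(include=[np.number]).mean()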
There is an option to do this, numeric_only, at a lower level, but for some reason we don't directly support it via the top-level functions (e.g. mean). I think I will create an issue to add this parameter. However, it will probably be False by default (so that columns are not excluded).
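For reference, later pandas releases do expose this flag on the top-level reductions, so on a recent version the following should work directly:
# skip the non-numeric (object) columns when reducing
column_means = df.mean(numeric_only=True)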