
Faster way to accomplish this Pandas job than by using Apply for large data set?

Tags:

python

pandas

I have a large dataset of CSV files containing two distinct object types: object_a and object_b. Each row also carries a numeric Ticks value.

Type,       Parent Name, Ticks
object_a,   4556421,     34
object_a,   4556421,     0
object_b,   4556421,     0
object_a,   3217863,     2
object_b,   3217863,     1
......

Objects are grouped by a Parent Name value; in most cases one of each object type shares a given Parent Name, but this is not always the case.

I have two objectives with this dataset:

  • extract all object_a's under a Parent Name where i) there is more than one object_a and ii) that object_a has 0 ticks while another object_a under the same Parent Name has >0 ticks, i.e. just the one(s) with zero ticks
  • extract all object_b's under a Parent Name where i) there is at least one object_a and ii) the object_b has 0 ticks while an object_a under the same Parent Name has >0 ticks

My first approach is to have a separate function for each task: read the CSV files (usually 1.5GB in size) in chunks, group the rows by Parent Name, and output the extracted rows to another CSV file...

def objective_one(group_name, group_df):
    # Keep only the object_a rows in this Parent Name group
    group_df = group_df[group_df['Type'] == 'object_a']

    if len(group_df) > 1:
        zero_tick_object_a = group_df[group_df['Ticks'] == 0]
        # Return the zero-tick rows only if some other object_a has ticks
        if len(zero_tick_object_a) < len(group_df):
            return zero_tick_object_a
        else:
            return pd.DataFrame(columns=group_df.columns)
    else:
        return pd.DataFrame(columns=group_df.columns)


def objective_two(group_name, group_df):
    # Split the group into its object_a rows and its zero-tick object_b rows
    object_a_in_group_df = group_df[group_df['Type'] == 'object_a']
    object_b_has_no_ticks_in_group_df = group_df[(group_df['Type'] == 'object_b') & (group_df['Ticks'] == 0)]

    if len(object_a_in_group_df) >= 1 and len(object_b_has_no_ticks_in_group_df) >= 1:
        # At least one object_a in the group must have ticks
        has_ticks_object_a = object_a_in_group_df[object_a_in_group_df['Ticks'] >= 1]

        if len(has_ticks_object_a) > 0:
            return object_b_has_no_ticks_in_group_df
        else:
            return pd.DataFrame(columns=group_df.columns)
    else:
        return pd.DataFrame(columns=group_df.columns)

Here are the calls to these functions in the main method:

for chunk in pd.read_csv(file, chunksize=500000):

   #objective one
   chunk_object_a = chunk.groupby(['Parent Name']).apply(lambda g: objective_one(g.name, g))
   ....
   ....
   #objective two
   chunk_object_b = chunk.groupby(['Parent Name']).apply(lambda g: objective_two(g.name, g))

# Then write the dataframes outputted by the apply methods to a csv file

The problem with this approach is that while it does get me the output I want, it is very slow for files in the 1GB-and-above range. Another issue is that reading the CSV in chunks can effectively cut groups in half (i.e. a Parent Name could be split across one chunk and the next, giving an inaccurate count of objects extracted).

Is there any way to optimize this to make it faster, and also to get around the chunking problem?

asked Sep 01 '17 by GreenGodot



2 Answers

My shot at the problem:

  • extract all object_a's under a Parent Name where i) there is more than one object_a and ii) that object_a has 0 ticks while another object_a under the same Parent Name has >0 ticks, i.e. just the one(s) with zero ticks
  • extract all object_b's under a Parent Name where i) there is at least one object_a and ii) the object_b has 0 ticks while an object_a under the same Parent Name has >0 ticks

My first impression when reading this is that the actual "Type" doesn't really matter: we just need at least one object_a with >0 Ticks to exist in each group, and then we extract all the rows with 0 ticks, regardless of their type.

Considering that, my approach was first to create a new column counting the object_a ticks for each parent. If this number is >0, it means that at least one object_a exists with Ticks>0.

In [63]: df.groupby(['Parent Name']).apply(lambda x: x[x['Type'] == 'object_a']['Ticks'].sum())
Out[63]: 
Parent Name
3217863     2
4556421    34
dtype: int64

Let's now merge that into the original DataFrame...

In [64]: sumATicks = df.groupby(['Parent Name']).apply(lambda x: x[x['Type'] == 'object_a']['Ticks'].sum())

In [65]: merged = df.merge(pd.DataFrame(sumATicks).rename(columns={0: 'nbATicks'}), left_on='Parent Name', right_index=True)

In [66]: merged
Out[66]: 
       Type  Parent Name  Ticks  nbATicks
0  object_a      4556421     34        34
1  object_a      4556421      0        34
2  object_b      4556421      0        34
3  object_a      3217863      2         2
4  object_b      3217863      1         2

...and extract all the interesting rows, according to the criteria I stated above:

In [67]: merged[(merged['nbATicks'] > 0) & (merged['Ticks'] == 0)]
Out[67]: 
       Type  Parent Name  Ticks  nbATicks
1  object_a      4556421      0        34
2  object_b      4556421      0        34

Hopefully I didn't miss any edge cases...
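For reference, the steps above can be folded into a single helper (a minimal sketch along the same lines; the function name extract_zero_tick_rows is made up for illustration):

import pandas as pd

def extract_zero_tick_rows(df):
    # Total object_a ticks per Parent Name, as a named Series
    sum_a_ticks = (df[df['Type'] == 'object_a']
                   .groupby('Parent Name')['Ticks'].sum()
                   .rename('nbATicks'))
    # Attach the per-parent sum to every row, then keep the zero-tick rows
    # whose parent has at least one ticked object_a
    merged = df.merge(sum_a_ticks, left_on='Parent Name', right_index=True)
    return merged[(merged['nbATicks'] > 0) & (merged['Ticks'] == 0)]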

Regarding the chunk problem, why not just load the whole CSV file into memory? If it's too big for that, you can try sorting by Parent Name before processing and splitting the chunks at the relevant group boundaries, as sketched below.
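Here is one way to do that splitting (a sketch, assuming the CSV is already sorted by Parent Name; the helper name chunks_with_whole_groups is made up): hold back the last, possibly incomplete, Parent Name group of each chunk and prepend it to the next chunk, so no group is ever cut in half.

import pandas as pd

def chunks_with_whole_groups(path, chunksize=500000):
    # Carry the trailing (possibly incomplete) group over to the next chunk
    leftover = None
    for chunk in pd.read_csv(path, chunksize=chunksize):
        if leftover is not None:
            chunk = pd.concat([leftover, chunk], ignore_index=True)
        # The last Parent Name in this chunk may continue in the next one
        last_parent = chunk['Parent Name'].iloc[-1]
        is_last_group = chunk['Parent Name'] == last_parent
        leftover = chunk[is_last_group]
        complete = chunk[~is_last_group]
        if not complete.empty:
            yield complete
    # Once the file is exhausted, the held-back group is complete
    if leftover is not None and not leftover.empty:
        yield leftover

Each yielded chunk then contains only whole groups, so the per-group logic gives the same counts as it would on the full file.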

answered Sep 21 '22 by 3kt

Here comes my idea for the problem:

I think the first objective is easier because we only depend on rows with object_a. We can use transform to convert the conditions into boolean masks:

df_1 = df.loc[df['Type']=='object_a']
object_a = df_1.loc[(df_1.groupby('Parent_Name')['Ticks'].transform('min')==0)&
                    (df_1.groupby('Parent_Name')['Ticks'].transform('max')>0)&
                    (df_1['Ticks']==0)
                   ]
Out[1]: 
       Type  Parent_Name  Ticks
1  object_a      4556421      0

For the second objective I create a list of Parent_Names meeting the object_a requirement. In the next step, isin is used to select only the corresponding rows.

a_condition = df.loc[df['Type']=='object_a'].groupby('Parent_Name')['Ticks'].sum()
a_condition = a_condition[a_condition>0].index

object_b = df.loc[(df['Type']=='object_b')&
                  (df['Ticks']==0)&
                  (df['Parent_Name'].isin(a_condition))
                 ]
Out[2]: 
       Type  Parent_Name  Ticks
2  object_b      4556421      0
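If both result sets should end up in one output file, as in the original loop, they can simply be concatenated before writing (a small usage sketch; the output path extracted.csv is made up):

import pandas as pd

# object_a and object_b are the DataFrames produced above
pd.concat([object_a, object_b]).to_csv('extracted.csv', index=False)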
answered Sep 21 '22 by P.Tillmann