Fastest way to compute a function on DataFrame slices by column value (Python pandas)

I am trying to create a column on a data frame which contains, for each row, the minimum of column A (the value column) over all rows for which column B (the id column) has the same value. My code is really slow; I'm looking for a faster way to do this. Here is my little function:

def apply_by_id_value(df, id_col="id_col", val_col="val_col", offset_col="offset", f=min):
    for rid in set(df[id_col].values):
        df.loc[df[id_col] == rid, offset_col] = f(df.loc[df[id_col] == rid, val_col])
    return df

And example usage:

import pandas as pd
import numpy as np
# create data frame
df = pd.DataFrame({"id_col":[0, 0, 0, 1, 1, 1, 2, 2, 2], 
                   "val_col":[0.1, 0.2, 0.3, 0.6, 0.4, 0.5, 0.2, 0.1, 0.0]})

print(df.head(10))
# output
   id_col  val_col
0       0      0.1
1       0      0.2
2       0      0.3
3       1      0.6
4       1      0.4
5       1      0.5
6       2      0.2
7       2      0.1
8       2      0.0

df = apply_by_id_value(df)
print(df.head(10))
# output

   id_col  val_col  offset
0       0      0.1     0.1
1       0      0.2     0.1
2       0      0.3     0.1
3       1      0.6     0.4
4       1      0.4     0.4
5       1      0.5     0.4
6       2      0.2     0.0
7       2      0.1     0.0
8       2      0.0     0.0

Some more context: in my real data, the "id_col" column has some 30,000 unique values, which means the data frame gets sliced 30,000 times. I suspect this is the bottleneck.
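For anyone who wants to reproduce that scale, here is a minimal sketch of a synthetic frame with 30,000 ids (the group size is an assumption for illustration, not taken from my real data):

    import numpy as np
    import pandas as pd

    n_ids = 30000        # roughly the number of unique ids in the real data
    rows_per_id = 25     # assumed group size, purely for illustration
    rng = np.random.RandomState(0)

    big = pd.DataFrame({
        "id_col": np.repeat(np.arange(n_ids), rows_per_id),
        "val_col": rng.rand(n_ids * rows_per_id),
    })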

asked Jun 03 '15 by nikosd
1 Answer

Perform a groupby on 'id_col' and then transform with 'min'. This returns a result aligned to your original df, so you can assign it straight to a new column (selecting 'val_col' explicitly keeps this working even if the frame gains more columns):

In [13]:

df = pd.DataFrame({"id_col":[0, 0, 0, 1, 1, 1, 2, 2, 2], 
                   "val_col":[0.1, 0.2, 0.3, 0.6, 0.4, 0.5, 0.2, 0.1, 0.0]})
df['offset'] = df.groupby('id_col')['val_col'].transform('min')
df
Out[13]:
   id_col  val_col  offset
0       0      0.1     0.1
1       0      0.2     0.1
2       0      0.3     0.1
3       1      0.6     0.4
4       1      0.4     0.4
5       1      0.5     0.4
6       2      0.2     0.0
7       2      0.1     0.0
8       2      0.0     0.0
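
The same result can also be obtained by computing one minimum per id and broadcasting it back onto the rows with map; a minimal sketch:

    # one minimum per id (a Series indexed by id_col values),
    # then broadcast back onto the rows by id
    mins = df.groupby('id_col')['val_col'].min()
    df['offset'] = df['id_col'].map(mins)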

timings

In [15]:

def apply_by_id_value(df, id_col="id_col", val_col="val_col", offset_col="offset", f=min):
    for rid in set(df[id_col].values):
        df.loc[df[id_col] == rid, offset_col] = f(df.loc[df[id_col] == rid, val_col])
    return df
%timeit apply_by_id_value(df)
%timeit df.groupby('id_col').transform('min')
100 loops, best of 3: 8.12 ms per loop
100 loops, best of 3: 5.99 ms per loop

So groupby + transform is faster even on this tiny dataset. I expect it to be significantly faster on your real dataset, as it will scale much better.

For an 800,000-row df I get the following timings:

1 loops, best of 3: 611 ms per loop
1 loops, best of 3: 438 ms per loop
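
The exact setup for that larger benchmark is not shown; a sketch along these lines would reproduce the scale (the row and group counts here are my assumptions):

    # build an ~800,000-row frame with many distinct ids, then time both approaches
    n = 800000
    df_big = pd.DataFrame({
        "id_col": np.random.randint(0, 30000, size=n),
        "val_col": np.random.rand(n),
    })
    %timeit apply_by_id_value(df_big)
    %timeit df_big.groupby('id_col')['val_col'].transform('min')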
answered Sep 29 '22 by EdChum