
Pandas - group by consecutive ranges

I have a dataframe with the following structure - Start, End and Height.

Some properties of the dataframe:

  • A row in the dataframe always starts where the previous row ended, i.e. if the end of row n is 100, then the start of row n+1 is 101.
  • The height of row n+1 is always different from the height of row n (this is the reason the data is in separate rows).

I'd like to group the dataframe so that the heights are bucketed into ranges of width 5, i.e. the buckets are 0, 1-5, 6-10, 11-15 and >15.

See the code example below, where what I'm looking for is the implementation of the group_by_bucket function.

I tried looking at other questions but couldn't find an exact answer to what I was looking for.

Thanks in advance!

>>> d = pd.DataFrame([[1,3,8], [4,10,7], [11,17,6], [18,26, 12], [27,30, 15], [31,40,6], [41, 42, 7]], columns=['start','end', 'height'])
>>> d
   start  end  height
0      1    3       8
1      4   10       7
2     11   17       6
3     18   26      12
4     27   30      15
5     31   40       6
6     41   42       7
>>> d_gb = group_by_bucket(d)
>>> d_gb
   start  end height_grouped
0      1   17           6_10
1     18   30          11_15
2     31   42           6_10
asked Apr 25 '16 by Moshe Einhorn

2 Answers

One way to do it (note the sample data here differs slightly from the question's):

df = pd.DataFrame([[1,3,10], [4,10,7], [11,17,6], [18,26, 12],
[27,30, 15], [31,40,6], [41, 42, 6]], columns=['start','end', 'height'])

Use cut to make groups:

df['groups']=pd.cut(df.height,[-1,0,5,10,15,1000])

Find break points:

df['categories']=(df.groups!=df.groups.shift()).cumsum()

Then df is:

"""
   start  end  height    groups  categories
0      1    3      10   (5, 10]           1
1      4   10       7   (5, 10]           1
2     11   17       6   (5, 10]           1
3     18   26      12  (10, 15]           2
4     27   30      15  (10, 15]           2
5     31   40       6   (5, 10]           3
6     41   42       6   (5, 10]           3
"""

Define the aggregation for each column:

f = {'start':['first'],'end':['last'], 'groups':['first']}

And use the groupby.agg function:

df.groupby('categories').agg(f)
"""
              groups  end start
               first last first
categories                     
1            (5, 10]   17     1
2           (10, 15]   30    18
3            (5, 10]   42    31
"""
answered by B. M.


You can use cut to bin the heights, build group ids with cumsum on a shifted comparison of the resulting Series, and aggregate with agg using first and last:

bins = [-1,0,1,5,10,15,100]
print(bins)
[-1, 0, 1, 5, 10, 15, 100]

cut_ser = pd.cut(d['height'], bins=bins)
print(cut_ser)
0     (5, 10]
1     (5, 10]
2     (5, 10]
3    (10, 15]
4    (10, 15]
5     (5, 10]
6     (5, 10]
Name: height, dtype: category
Categories (6, object): [(-1, 0] < (0, 1] < (1, 5] < (5, 10] < (10, 15] < (15, 100]]

print((cut_ser.shift() != cut_ser).cumsum())
0    1
1    1
2    1
3    2
4    2
5    3
6    3
Name: height, dtype: int32

print(d.groupby([(cut_ser.shift() != cut_ser).cumsum(), cut_ser])
       .agg({'start' : 'first','end' : 'last'})
       .reset_index(level=1).reset_index(drop=True)
       .rename(columns={'height':'height_grouped'}))

  height_grouped  start  end
0        (5, 10]      1   17
1       (10, 15]     18   30
2        (5, 10]     31   42

EDIT:

Timings:

In [307]: %timeit a(df)
100 loops, best of 3: 5.45 ms per loop

In [308]: %timeit b(d)
The slowest run took 4.45 times longer than the fastest. This could mean that an intermediate result is being cached 
100 loops, best of 3: 3.28 ms per loop

Code:

d = pd.DataFrame([[1,3,8], [4,10,7], [11,17,6], [18,26, 12], [27,30, 15], [31,40,6], [41, 42, 7]], columns=['start','end', 'height'])
print(d)

df = d.copy()


def a(df):
    df['groups']=pd.cut(df.height,[-1,0,5,10,15,1000])
    df['categories']=(df.groups!=df.groups.shift()).cumsum()
    f = {'start':['first'],'end':['last'], 'groups':['first']}
    return df.groupby('categories').agg(f)

def b(d):
    bins = [-1,0,1,5,10,15,100]
    cut_ser = pd.cut(d['height'], bins=bins)
    return d.groupby([(cut_ser.shift() != cut_ser).cumsum(), cut_ser]).agg({'start' : 'first','end' : 'last'}).reset_index(level=1).reset_index(drop=True).rename(columns={'height':'height_grouped'})


print(a(df))
print(b(d))
answered by jezrael