Pandas: assign an index to each group identified by groupby

Tags: python, pandas

When using groupby(), how can I create a DataFrame with a new column containing an index of the group number, similar to dplyr::group_indices in R? For example, if I have

>>> df = pd.DataFrame({'a': [1, 1, 1, 2, 2, 2], 'b': [1, 1, 2, 1, 1, 2]})
>>> df
   a  b
0  1  1
1  1  1
2  1  2
3  2  1
4  2  1
5  2  2

How can I get a DataFrame like

   a  b  idx
0  1  1  1
1  1  1  1
2  1  2  2
3  2  1  3
4  2  1  3
5  2  2  4

(which particular number is assigned to which group doesn't matter)

asked Jan 11 '17 by user2667066



6 Answers

Here is a solution using ngroup (available as of pandas 0.20.2), as suggested in a comment above by Constantino.

import pandas as pd

df = pd.DataFrame({'a': [1, 1, 1, 2, 2, 2], 'b': [1, 1, 2, 1, 1, 2]})
# ngroup() numbers each group from 0, following the sorted order of the group keys
df['idx'] = df.groupby(['a', 'b']).ngroup()
df
   a  b  idx
0  1  1    0
1  1  1    0
2  1  2    1
3  2  1    2
4  2  1    2
5  2  2    3
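
A small variation, not part of the original answer: ngroup numbers groups following the sorted order of the keys by default; passing sort=False to groupby numbers them in order of first appearance instead (the idx_appearance column name is just for illustration):

df['idx_appearance'] = df.groupby(['a', 'b'], sort=False).ngroup()
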
answered by Calum You


Here's a concise way using drop_duplicates and merge to get a unique identifier.

group_vars = ['a', 'b']
df.merge(df.drop_duplicates(group_vars).reset_index(), on=group_vars)

   a  b  index
0  1  1      0
1  1  1      0
2  1  2      2
3  2  1      3
4  2  1      3
5  2  2      5

The identifier in this case goes 0, 2, 3, 5 (just a residue of the original index), but it could easily be changed to 0, 1, 2, 3 with an additional reset_index(drop=True), as shown below.
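
For example (a sketch of that extra step; unique_groups is just an illustrative name):

group_vars = ['a', 'b']
# drop the residual index first, then materialise a fresh 0..n-1 index as a column
unique_groups = df.drop_duplicates(group_vars).reset_index(drop=True)
df.merge(unique_groups.reset_index(), on=group_vars)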

Update: Newer versions of pandas (0.20.2 and later) offer a simpler way to do this with the ngroup method, as noted in a comment on the question by @Constantino and in a subsequent answer by @CalumYou. I'll leave this here as an alternate approach, but ngroup seems like the better way to do this in most cases.

answered by JohnE


A simple way to do this is to concatenate your grouping columns (so that each combination of their values represents a uniquely distinct element), then convert the result to a pandas Categorical and keep only its codes:

df['idx'] = pd.Categorical(df['a'].astype(str) + '_' + df['b'].astype(str)).codes
df

    a   b   idx
0   1   1   0
1   1   1   0
2   1   2   1
3   2   1   2
4   2   1   2
5   2   2   3

Edit: changed the labels property to codes, as the former seems to be deprecated

Edit2: added a separator, as suggested by Authman Apatira, so that different value combinations cannot collide once concatenated

answered by foglerit


Definitely not the most straightforward solution, but here is what I would do (comments in the code):

df = pd.DataFrame({'a': [1, 1, 1, 2, 2, 2], 'b': [1, 1, 2, 1, 1, 2]})

# create a dummy group id by joining the desired columns as strings
df["idx"] = df[["a", "b"]].astype(str).apply(lambda x: "".join(x), axis=1)

print(df)

That generates a unique idx for each combination of a and b.

   a  b idx
0  1  1  11
1  1  1  11
2  1  2  12
3  2  1  21
4  2  1  21
5  2  2  22

But this is still a rather silly index (think of more complex values in columns a and b: joining '1' and '11' gives the same string as joining '11' and '1'). So let's clean up the index:

# create a dictionary of dummy group ids keyed by an integer position
dict_idx = dict(enumerate(set(df["idx"])))

# switch keys and values, so the dict can be used with the .replace method
dict_idx = {y: x for x, y in dict_idx.items()}

# replace the dummy ids using the generated dict
df["idx"].replace(dict_idx, inplace=True)

print(df)

That would produce the desired output:

   a  b  idx
0  1  1    0
1  1  1    0
2  1  2    1
3  2  1    2
4  2  1    2
5  2  2    3
answered by Marjan Moderc


A way that I believe is faster than the current accepted answer by about an order of magnitude (timing results below):

def create_index_usingduplicated(df, grouping_cols=['a', 'b']):
    df.sort_values(grouping_cols, inplace=True)
    # You could do the following three lines in one, I just thought 
    # this would be clearer as an explanation of what's going on:
    duplicated = df.duplicated(subset=grouping_cols, keep='first')
    new_group = ~duplicated
    return new_group.cumsum()
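
As the comment in the function notes, the last three lines can be collapsed into a single equivalent expression:

(~df.duplicated(subset=grouping_cols, keep='first')).cumsum()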

Timing results:

import numpy as np

a = np.random.randint(0, 1000, size=int(1e5))
b = np.random.randint(0, 1000, size=int(1e5))
df = pd.DataFrame({'a': a, 'b': b})

In [6]: %timeit df['idx'] = pd.Categorical(df['a'].astype(str) + df['b'].astype(str)).codes
1 loop, best of 3: 375 ms per loop

In [7]: %timeit df['idx'] = create_index_usingduplicated(df, grouping_cols=['a', 'b'])
100 loops, best of 3: 17.7 ms per loop
answered by maxliving


I'm not sure this is such a trivial problem. Here is a somewhat convoluted solution that first sorts by the grouping columns, then checks whether each row differs from the previous one and, if so, increments the group counter by 1. See further below for an answer covering string data.

df.sort_values(['a', 'b']).diff().fillna(0).ne(0).any(axis=1).cumsum().add(1)

Output

0    1
1    1
2    2
3    3
4    3
5    4
dtype: int64

So breaking this up into steps, let's look at the output of df.sort_values(['a', 'b']).diff().fillna(0), which checks whether each row differs from the previous row. Any non-zero entry indicates a new group.

     a    b
0  0.0  0.0
1  0.0  0.0
2  0.0  1.0
3  1.0 -1.0
4  0.0  0.0
5  0.0  1.0

A new group only needs a single column to differ, which is exactly what .ne(0).any(axis=1) checks: not equal to 0 in any of the columns. A cumulative sum then keeps track of the groups.
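
To illustrate, the boolean intermediate from the expression above looks like this (the values follow directly from the diff table):

df.sort_values(['a', 'b']).diff().fillna(0).ne(0).any(axis=1)

0    False
1    False
2     True
3     True
4    False
5     True
dtype: bool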

Answer for columns as strings

# create fake data and sort it
df = pd.DataFrame({'a': list('aabbaccdc'), 'b': list('aabaacddd')})
df1 = df.sort_values(['a', 'b'])

Output of df1:

   a  b
0  a  a
1  a  a
4  a  a
3  b  a
2  b  b
5  c  c
6  c  d
8  c  d
7  d  d

Take a similar approach by checking whether the group has changed:

df1.ne(df1.shift().bfill()).any(axis=1).cumsum().add(1)

0    1
1    1
4    1
3    2
2    3
5    4
6    5
8    5
7    6
answered by Ted Petrou