Python Pandas Choosing Random Sample of Groups from Groupby

What is the best way to get a random sample of the elements of a groupby? As I understand it, a groupby is just an iterable over groups.

The standard way I would do this for an iterable, if I wanted to select N = 200 elements, is:

rand = random.sample(data, N)  

If you attempt the above where data is a groupby object, the elements of the resultant list are tuples for some reason.
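Iterating over a groupby yields (key, group) pairs, which is why the sampled elements come back as tuples. A minimal sketch with a toy frame, just to show the shape of the items:

import random
import pandas as pd

df = pd.DataFrame({'some_key': [0, 1, 0, 1], 'val': [1, 2, 3, 4]})
grouped = df.groupby('some_key')

items = list(grouped)                    # each item is a (key, sub-DataFrame) tuple
key, sub_df = random.sample(items, 1)[0]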

I found the example below for randomly selecting the elements of a single-key groupby; however, this does not work with a multi-key groupby. From: How to access pandas groupby dataframe by key

# create groupby object
grouped = df.groupby('some_key')

# pick N dataframes and grab their indices
sampled_df_i = random.sample(grouped.indices, N)

# grab the groups using the groupby object's get_group method
df_list = map(lambda df_i: grouped.get_group(df_i), sampled_df_i)

# optionally - turn it all back into a single dataframe object
sampled_df = pd.concat(df_list, axis=0, join='outer')
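As a side note, under Python 3 random.sample requires a sequence, while grouped.indices is a dict, so the call above fails as written. A rough adaptation of the same snippet, wrapping the dict in list():

# same grouped object as above; grouped.indices is a dict,
# so wrap it in list() for random.sample under Python 3
sampled_df_i = random.sample(list(grouped.indices), N)
df_list = [grouped.get_group(df_i) for df_i in sampled_df_i]
sampled_df = pd.concat(df_list, axis=0, join='outer')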
sfortney asked Sep 01 '15




2 Answers

I feel like lower-level numpy operations are cleaner:

import pandas as pd
import numpy as np

df = pd.DataFrame(
    {
        "some_key": [0, 1, 2, 3, 0, 1, 2, 3, 0, 1, 2, 3],
        "val": [1, 2, 3, 4, 1, 5, 1, 5, 1, 6, 7, 8],
    }
)

# all unique group keys
ids = df["some_key"].unique()
# sample 2 keys without replacement
ids = np.random.choice(ids, size=2, replace=False)
ids

# > array([3, 2])

# keep only the rows belonging to the sampled groups
df.loc[df["some_key"].isin(ids)]

# >     some_key  val
# 2          2    3
# 3          3    4
# 6          2    1
# 7          3    5
# 10         2    7
# 11         3    8
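The same idea extends to more than one key column: sample from the distinct key combinations, then join back to keep only those groups. A rough sketch, assuming a frame with two key columns named some_key1 and some_key2:

keys = ["some_key1", "some_key2"]            # assumed column names
combos = df[keys].drop_duplicates()          # one row per group
picked = combos.sample(n=2, replace=False)   # sample 2 groups
sampled = df.merge(picked, on=keys, how="inner")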
jsta answered Oct 07 '22


You can take a random sample of the unique values from df.some_key.unique(), use that to slice the df, and finally group by on the result:

In [337]:

import random

df = pd.DataFrame({'some_key': [0,1,2,3,0,1,2,3,0,1,2,3],
                   'val':      [1,2,3,4,1,5,1,5,1,6,7,8]})
In [338]:

print(df[df.some_key.isin(random.sample(list(df.some_key.unique()), 2))].groupby('some_key').mean())
               val
some_key          
0         1.000000
2         3.666667

If there is more than one groupby key:

In [358]:

df = pd.DataFrame({'some_key1':[0,1,2,3,0,1,2,3,0,1,2,3],
                   'some_key2':[0,0,0,0,1,1,1,1,2,2,2,2],
                   'val':      [1,2,3,4,1,5,1,5,1,6,7,8]})
In [359]:

gby = df.groupby(['some_key1', 'some_key2'])
In [360]:

print(gby.mean().loc[random.sample(list(gby.indices.keys()), 2)])
                     val
some_key1 some_key2     
1         1            5
3         2            8
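If you want the full rows of the sampled groups rather than their means, the same sampled keys can be passed to get_group and concatenated, along these lines (a sketch reusing the gby object above):

sampled_keys = random.sample(list(gby.indices.keys()), 2)
print(pd.concat([gby.get_group(k) for k in sampled_keys]))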

But if you are just going to get the values of each group, you don't even need to groupby; a MultiIndex will do:

In [372]:

idx = random.sample(list(set(pd.MultiIndex.from_product((df.some_key1, df.some_key2)).tolist())),
                    2)
print(df.set_index(['some_key1', 'some_key2']).loc[idx])
                     val
some_key1 some_key2     
2         0            3
3         1            5
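One caveat: from_product builds the full cross product of the two columns, which in general can contain key pairs that never occur in the data (in this toy frame every pair happens to exist), and .loc on a missing pair raises a KeyError. Sampling from the pairs actually present avoids that; a sketch:

present = list(df.set_index(['some_key1', 'some_key2']).index.unique())
idx = random.sample(present, 2)
print(df.set_index(['some_key1', 'some_key2']).loc[idx])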
CT Zhu answered Oct 07 '22