I have a DataFrame containing information about a population that I wish to generate a sample from. I also have a DataFrame, sample_info,
that details how many units of each group in the population
DataFrame I need in my sample. I have developed some code that achieves what I need, but it runs slower than I would like given the large datasets I am working with.
Is there a way to group the population frame and apply sampling to the groups, rather than looping through them as I have done below?
import pandas as pd
population = pd.DataFrame([[1,True],[1,False],[1,False],[2,True],[2,True],[2,False],[2, True]], columns = ['Group ID','Response'])
Group ID Response
0 1 True
1 1 False
2 1 False
3 2 True
4 2 True
5 2 False
6 2 True
sample_info = pd.DataFrame([[1,5],[2,6]], columns = ['Group ID','Sample Size'])
Group ID Sample Size
0 1 5
1 2 6
output = pd.DataFrame(columns=['Group ID', 'Response'])
for index, row in sample_info.iterrows():
    # Filter to the current group, then sample with replacement
    group = population.loc[population['Group ID'] == row['Group ID']]
    output = pd.concat([output, group.sample(n=row['Sample Size'], replace=True)])
I couldn't figure out how to bring in the sample size information using groupby and apply, as suggested in Pandas: sample each group after groupby.
Convert sample_info to a dictionary, group population by Group ID, then pass the per-group sample sizes to DataFrame.sample via the dictionary. Note that replace=True is needed here, since the requested sample sizes exceed the group sizes.
mapper = sample_info.set_index('Group ID')['Sample Size'].to_dict()
population.groupby('Group ID').apply(
    lambda x: x.sample(n=mapper.get(x.name), replace=True)).reset_index(drop=True)
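As a quick sanity check (a minimal sketch; result is just a name I'm introducing here), the group counts in the output should match sample_info:
result = population.groupby('Group ID').apply(
    lambda x: x.sample(n=mapper.get(x.name), replace=True)).reset_index(drop=True)
result['Group ID'].value_counts().sort_index()
# expected: group 1 -> 5 rows, group 2 -> 6 rows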
I am not sure about the speed, but sampling the index should at least save memory:
import numpy as np

d = population.groupby('Group ID').groups  # maps each Group ID to its row labels
a = np.concatenate([np.random.choice(d[x], y) for x, y in zip(sample_info['Group ID'], sample_info['Sample Size'])])
population.loc[a]
Out[83]:
Group ID Response
1 1 False
1 1 False
2 1 False
0 1 True
1 1 False
3 2 True
5 2 False
3 2 True
4 2 True
5 2 False
5 2 False
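If this comes up repeatedly, the same index-sampling idea can be wrapped in a small helper. This is only a sketch: the name group_sample and the seeded default_rng generator are my own additions, and it assumes every Group ID in sample_info exists in population.
import numpy as np
import pandas as pd

def group_sample(population, sample_info, seed=None):
    # Sample rows per Group ID with replacement; sizes come from sample_info.
    rng = np.random.default_rng(seed)
    groups = population.groupby('Group ID').groups  # Group ID -> Index of row labels
    picks = np.concatenate([
        rng.choice(groups[gid], size)  # replace=True is the default
        for gid, size in zip(sample_info['Group ID'], sample_info['Sample Size'])
    ])
    return population.loc[picks].reset_index(drop=True)

sample = group_sample(population, sample_info, seed=0)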