I have the following data:
import pandas as pd

df = pd.DataFrame({'Group_ID': [1, 1, 1, 2, 2, 2, 3, 4, 5, 5],
                   'Item_id': [1, 2, 3, 4, 5, 6, 7, 8, 9, 10],
                   'Target': [0, 0, 1, 0, 1, 1, 0, 0, 0, 1]})
Group_ID Item_id Target
0 1 1 0
1 1 2 0
2 1 3 1
3 2 4 0
4 2 5 1
5 2 6 1
6 3 7 0
7 4 8 0
8 5 9 0
9 5 10 1
I need to split the dataset into a training and testing set based on the "Group_ID" so that 80% of the data goes into a training set and 20% into a test set.
That is, I need my training set to look something like:
Group_ID Item_id Target
0 1 1 0
1 1 2 0
2 1 3 1
3 2 4 0
4 2 5 1
5 2 6 1
6 3 7 0
7 4 8 0
And test set:
Group_ID Item_id Target
8 5 9 0
9 5 10 1
What would be the simplest way to do this? As far as I know, the standard train_test_split function in sklearn does not support splitting by groups in a way that also lets me specify the size of the split (e.g. 80/20).
We can use train_test_split to first split the original dataset into a training and a test set. Then, to get a validation set, we can apply the same function again to the training portion. In the function below, the test and validation sizes are expressed as fractions of the original data.
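The function itself was not preserved here; the following is a minimal sketch of the two-step split described above. The helper name train_val_test_split and the default ratios are illustrative, not taken from the original answer:

from sklearn.model_selection import train_test_split

def train_val_test_split(df, test_size=0.2, val_size=0.2, random_state=7):
    # First split: hold out the test set from the full data.
    train_val, test = train_test_split(df, test_size=test_size, random_state=random_state)
    # Second split: carve the validation set out of the remaining rows.
    # val_size is a fraction of the *original* data, so rescale it relative
    # to what is left after the test split.
    val_fraction = val_size / (1.0 - test_size)
    train, val = train_test_split(train_val, test_size=val_fraction, random_state=random_state)
    return train, val, test

Note that plain train_test_split splits row-wise and does not keep all rows of a Group_ID together; the GroupShuffleSplit approach further down handles that requirement.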
To avoid relying on a single split, we can perform something called cross-validation. It is very similar to a train/test split, but applied to more subsets: we split the data into k subsets, train on k-1 of them, and evaluate on the remaining subset each time.
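As a sketch of that idea on this data, using GroupKFold (my choice here, not stated in the original answer) so that rows sharing a Group_ID never land in both the training and evaluation folds:

from sklearn.model_selection import GroupKFold

gkf = GroupKFold(n_splits=3)
for fold, (train_idx, test_idx) in enumerate(gkf.split(df, groups=df['Group_ID'])):
    train_fold, test_fold = df.iloc[train_idx], df.iloc[test_idx]
    # Each group appears in exactly one evaluation fold across the k iterations.
    print(f"Fold {fold}: train groups {sorted(train_fold['Group_ID'].unique())}, "
          f"test groups {sorted(test_fold['Group_ID'].unique())}")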
I figured out the answer. This seems to work:
from sklearn.model_selection import GroupShuffleSplit

# Shuffle-split so that roughly 20% of the rows go to the test set while
# every Group_ID stays entirely on one side of the split.
splitter = GroupShuffleSplit(test_size=.20, n_splits=2, random_state=7)
split = splitter.split(df, groups=df['Group_ID'])
train_inds, test_inds = next(split)

train = df.iloc[train_inds]
test = df.iloc[test_inds]
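A quick sanity check (my addition, not part of the original answer) confirms that no group ends up in both sets:

# Expect an empty set: train and test share no Group_ID.
print(set(train['Group_ID']) & set(test['Group_ID']))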