
Grouped sampling in scikit-learn

Is there a way to conveniently split a dataset into training and testing sets while keeping records that belong to the same group together?

Take, for example, a table that records independent and dependent variables for every person_id, where each person may have one or more entries:

import numpy as np
import pandas as pd

tbl = pd.DataFrame(dict(
        person_id=list('aaabbcccdeeefffhiijj'),  # group key: 10 people, 20 rows
        random_variable=np.linspace(0, 1, 20),
        dependent_variable=np.arange(20),
    ))

Now, I want to split the data into training and testing sets while keeping all records that belong to the same person in the same set. Obviously, sklearn.cross_validation.train_test_split doesn't do the job. I'm aware of sklearn.cross_validation.LeavePLabelOut, but instead of producing a single split it enumerates all possible combinations, which is not what I currently want.
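Note: scikit-learn 0.18 and later move these utilities to sklearn.model_selection and add GroupShuffleSplit, which produces exactly this kind of single group-preserving split. A minimal sketch, assuming scikit-learn >= 0.18:

from sklearn.model_selection import GroupShuffleSplit

# One random 50/50 split that never separates rows sharing a person_id.
gss = GroupShuffleSplit(n_splits=1, test_size=0.5, random_state=0)
train_idx, test_idx = next(gss.split(tbl, groups=tbl['person_id']))
training_set, testing_set = tbl.iloc[train_idx], tbl.iloc[test_idx]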

Another approach would be to compute a hash from the person_id field and use it for sampling:

import numpy as np

salt = str(np.random.rand())  # random salt: a fresh split on every run
# Map each person_id to a pseudo-random bucket in [0, 100); all rows of a
# given person land in the same bucket. (Note: in Python 3, str hashes are
# themselves randomized per process, so this is not reproducible across
# runs even with a fixed salt.)
hash_values = tbl['person_id'].apply(lambda p: hash(salt + p) % 100)
# 50/50 split
sel_training = hash_values < 50
training_set = tbl.loc[sel_training]
testing_set = tbl.loc[~sel_training]  # ~ (not -) negates a boolean mask
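A quick check (illustrative, not part of the approach itself) confirms that no person ends up in both sets:

# The person_id sets of the two splits must be disjoint.
assert not set(training_set['person_id']) & set(testing_set['person_id'])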

Is there a more elegant way to achieve this task?

asked Nov 01 '22 by Boris Gorelik

1 Answer

I wound up writing my own cross-validation class to do exactly what you are talking about. Here's the code (sorry, it's not terribly clean). Note that it also rebalances the training set by undersampling the majority class:

import numpy as np
import pandas as pd
import sklearn.cross_validation


class StratifiedKFold_ByColumn(object):
    """Stratified k-fold that keeps every row sharing the same value of
    `colname` in the same fold, then balances the training set by
    undersampling the majority (zero) class."""

    def __init__(self, n_folds, X, y, colname):
        # Collapse to one label per group (max of y within the group) so
        # that stratification operates on groups, not rows.
        groupable = pd.concat([X[colname], y], axis=1)
        grouped = groupable.groupby([colname]).aggregate(max)
        self.column = X[colname]
        self.colname = colname
        self.groups = grouped.index.values  # group names, in grouped order

        # StratifiedKFold yields positional indices into the grouped
        # labels; keep them and translate back to group names later.
        self.folds = [
            (train, val) for (train, val) in
            sklearn.cross_validation.StratifiedKFold(
                y=grouped.values[:, 0], n_folds=n_folds, shuffle=True)
        ]
        self.n_folds = n_folds
        self.i = 0
        self.y = y

    def __len__(self):
        return self.n_folds

    def __iter__(self):
        self.i = 0
        return self

    def test(self):
        # Sanity check: print class counts (positives, negatives) per fold.
        for train, val in self.folds:
            train_mask = self.column.isin(self.groups[train])
            val_mask = self.column.isin(self.groups[val])
            print('train:', self.y[train_mask].sum(), (1 - self.y[train_mask]).sum())
            print('val:', self.y[val_mask].sum(), (1 - self.y[val_mask]).sum())

    def __next__(self):
        if self.i >= self.n_folds:
            raise StopIteration()
        train, val = self.folds[self.i]
        self.i += 1

        # Map positional fold indices -> group names -> row masks.
        train_mask = self.column.isin(self.groups[train])
        val_mask = self.column.isin(self.groups[val])

        y_train = self.y[train_mask]
        X_train = self.column[train_mask]

        n_tr_1 = (y_train != 0).sum()
        n_tr_0 = (y_train == 0).sum()
        assert n_tr_1 < n_tr_0
        stride = n_tr_0 // n_tr_1  # negatives to keep per positive

        X_train_1 = X_train[y_train != 0]
        X_train_0 = X_train[y_train == 0]

        # Interleave each positive row with `stride` negative rows so the
        # returned training indices are roughly class-balanced.
        train_idxs = []
        for i_1 in range(n_tr_1):
            train_idxs.append(X_train_1[i_1:(i_1 + 1)].index)
            train_idxs.append(X_train_0[i_1 * stride:(i_1 + 1) * stride].index)
        train_idxs = np.concatenate(train_idxs)

        val_idxs = val_mask[val_mask].index

        return np.array(train_idxs), np.array(val_idxs)

    next = __next__  # Python 2 compatibility
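For illustration, a usage sketch; the DataFrame df and its 'label' column here are hypothetical, and the class assumes a binary target with more zeros than ones:

# Hypothetical data: df has a 'person_id' column and a binary 'label'
# column with more negatives (0) than positives (per the assert above).
cv = StratifiedKFold_ByColumn(n_folds=3, X=df, y=df['label'], colname='person_id')
for train_idxs, val_idxs in cv:
    train_rows = df.loc[train_idxs]  # class-balanced, group-pure training rows
    val_rows = df.loc[val_idxs]      # untouched validation rows
    # ...fit and evaluate a model here...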
answered Nov 02 '22 by Andreus