
k-fold cross validation using DataLoaders in PyTorch

I have split my training dataset into 80% train and 20% validation data and created DataLoaders as shown below. However, I do not want to limit my model's training to this single split, so I thought of splitting my data into K (maybe 5) folds and performing cross-validation. The problem is that I do not know how to recombine the split datasets into DataLoaders for each fold.

from torch.utils.data import DataLoader, random_split

# 80/20 split of the full dataset
train_size = int(0.8 * len(full_dataset))
validation_size = len(full_dataset) - train_size
train_dataset, validation_dataset = random_split(full_dataset, [train_size, validation_size])

# sampler_ is my own sampler factory, defined elsewhere
full_loader = DataLoader(full_dataset, batch_size=4, sampler=sampler_(full_dataset), pin_memory=True)
train_loader = DataLoader(train_dataset, batch_size=4, sampler=sampler_(train_dataset))
val_loader = DataLoader(validation_dataset, batch_size=1, sampler=sampler_(validation_dataset))
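
For context, one common way to wire K folds into DataLoaders is to generate per-fold index splits and pass them through a SubsetRandomSampler. Below is a minimal sketch of that pattern, assuming sklearn is available and reusing full_dataset from above; neither KFold nor SubsetRandomSampler appears in the original code.

import numpy as np
from sklearn.model_selection import KFold
from torch.utils.data import DataLoader, SubsetRandomSampler

# Build one train/validation loader pair per fold from the same full_dataset.
kfold = KFold(n_splits=5, shuffle=True, random_state=42)
for fold, (train_idx, val_idx) in enumerate(kfold.split(np.arange(len(full_dataset)))):
    train_loader = DataLoader(full_dataset, batch_size=4,
                              sampler=SubsetRandomSampler(train_idx), pin_memory=True)
    val_loader = DataLoader(full_dataset, batch_size=1,
                            sampler=SubsetRandomSampler(val_idx), pin_memory=True)
    # train on train_loader and evaluate on val_loader for this fold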
asked Mar 27 '20 by Suraj Subramanian

1 Answer

I just wrote a cross-validation function that works with a DataLoader and Dataset. Here is my code; I hope it helps.

import pandas as pd
import torch

# define a cross-validation function
def crossvalid(model=None, criterion=None, optimizer=None, dataset=None, k_fold=5):

    train_score = pd.Series(dtype=float)
    val_score = pd.Series(dtype=float)

    total_size = len(dataset)
    fraction = 1 / k_fold
    seg = int(total_size * fraction)
    # tr: train, val: valid; l: left, r: right
    # e.g. trrr: right index of the right-hand train subset
    # index layout per fold: [trll,trlr), [vall,valr), [trrl,trrr)
    for i in range(k_fold):
        trll = 0
        trlr = i * seg
        vall = trlr
        valr = i * seg + seg
        trrl = valr
        trrr = total_size
        print("train indices: [%d,%d),[%d,%d), test indices: [%d,%d)"
              % (trll, trlr, trrl, trrr, vall, valr))

        train_left_indices = list(range(trll, trlr))
        train_right_indices = list(range(trrl, trrr))

        train_indices = train_left_indices + train_right_indices
        val_indices = list(range(vall, valr))

        train_set = torch.utils.data.Subset(dataset, train_indices)
        val_set = torch.utils.data.Subset(dataset, val_indices)
        print(len(train_set), len(val_set))

        train_loader = torch.utils.data.DataLoader(train_set, batch_size=50,
                                                   shuffle=True, num_workers=4)
        val_loader = torch.utils.data.DataLoader(val_set, batch_size=50,
                                                 shuffle=True, num_workers=4)

        # train() and valid() are my own training/evaluation helpers
        train_acc = train(model, criterion, optimizer, train_loader, epoch=1)
        train_score.at[i] = train_acc
        val_acc = valid(model, criterion, optimizer, val_loader)
        val_score.at[i] = val_acc

    return train_score, val_score


train_score, val_score = crossvalid(res_model, criterion, optimizer, dataset=tiny_dataset)
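
The train and valid calls above refer to the answerer's own training and evaluation helpers, which are not shown. A minimal sketch of what they are assumed to look like (each returning an accuracy for the fold), just to make the example self-contained:

import torch

# Hypothetical train/valid helpers assumed by crossvalid(); the originals are not shown.
def train(model, criterion, optimizer, loader, epoch=1):
    model.train()
    correct, total = 0, 0
    for _ in range(epoch):
        for inputs, labels in loader:
            optimizer.zero_grad()
            outputs = model(inputs)
            loss = criterion(outputs, labels)
            loss.backward()
            optimizer.step()
            correct += (outputs.argmax(dim=1) == labels).sum().item()
            total += labels.size(0)
    return correct / total  # training accuracy for this fold

def valid(model, criterion, optimizer, loader):
    model.eval()
    correct, total = 0, 0
    with torch.no_grad():
        for inputs, labels in loader:
            outputs = model(inputs)
            correct += (outputs.argmax(dim=1) == labels).sum().item()
            total += labels.size(0)
    return correct / total  # validation accuracy for this fold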


To give an intuition that the splits are correct, here is the output of the print statements in the loop:

train indices: [0,0),[3600,18000), test indices: [0,3600)
14400 3600

train indices: [0,3600),[7200,18000), test indices: [3600,7200)
14400 3600

train indices: [0,7200),[10800,18000), test indices: [7200,10800)
14400 3600

train indices: [0,10800),[14400,18000), test indices: [10800,14400)
14400 3600

train indices: [0,14400),[18000,18000), test indices: [14400,18000)
14400 3600
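
Two things to keep in mind with this approach: the folds are contiguous slices, so if the dataset is ordered (for example by class) you should shuffle the indices beforehand; and because seg = int(total_size / k_fold) truncates, any samples beyond k_fold * seg never appear in a validation fold when the dataset size is not divisible by k_fold.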
answered Sep 20 '22 by Skipper