
Memory efficient way to split large numpy array into train and test

I have a large numpy array, and when I run scikit-learn's train_test_split to split it into training and test data, I always run into memory errors. What would be a more memory-efficient way to split into train and test, and why does train_test_split cause this?

The following code results in a memory error and causes a crash:

import numpy as np
from sklearn.model_selection import train_test_split

# ~5.2 GiB of float64 data: 10000 * 70000 * 8 bytes
X = np.random.random((10000, 70000))
Y = np.random.random((10000,))
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.33, random_state=42)
asked Jul 17 '15 by user1879926

1 Answer

One method that I've tried and that works is to store X in a pandas DataFrame and shuffle it:

X = X.reindex(np.random.permutation(X.index))

since I arrive at the same memory error when I try

np.random.shuffle(X)
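
For completeness, here is a minimal sketch of the full shuffle step (assuming X and Y are the numpy arrays from the question and pandas is available; note that the same permutation has to be applied to Y, or features and targets fall out of alignment):

import numpy as np
import pandas as pd

perm = np.random.permutation(X.shape[0])  # one shared row permutation
X = pd.DataFrame(X).reindex(perm).values  # shuffle X's rows, then back to numpy
Y = Y[perm]                               # reorder Y identically so labels stay aligned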

With the rows shuffled and the DataFrame converted back to a numpy array, this function produces the train/test split:

# test_proportion of 3 means 1/3, so 33% test and 67% train.
# Note: this only splits; the shuffling already happened above.
def shuffle(matrix, target, test_proportion):
    ratio = int(matrix.shape[0] / test_proportion)  # number of rows in the test set
    X_train = matrix[ratio:, :]
    X_test = matrix[:ratio, :]
    Y_train = target[ratio:]  # target is 1-D, so no second axis to index
    Y_test = target[:ratio]
    return X_train, X_test, Y_train, Y_test

X_train, X_test, Y_train, Y_test = shuffle(X, Y, 3)
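
A quick sanity check on the resulting shapes (for the 10000-row arrays above, int(10000 / 3) = 3333 rows go to the test set):

print(X_train.shape, X_test.shape)  # (6667, 70000) (3333, 70000)
print(Y_train.shape, Y_test.shape)  # (6667,) (3333,)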

This works for now, and when I want to do k-fold cross-validation, I can loop k times and reshuffle the pandas DataFrame each time. While this suffices for now, why do numpy's and scikit-learn's implementations of shuffle and train_test_split result in memory errors for big arrays?
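
One plausible explanation (my understanding, not something I have verified against the library internals): the shuffle function above splits with basic slicing, which returns views that share the parent array's buffer, while train_test_split selects rows with fancy indexing, which allocates a full independent copy for each of the four outputs on top of the ~5 GiB original. A small sketch illustrating the difference:

import numpy as np

X = np.random.random((1000, 700))

view = X[:333]         # basic slice: a view, no new data buffer
print(view.base is X)  # True -> shares memory with X

idx = np.random.permutation(1000)[:333]
copy = X[idx]          # fancy indexing: allocates an independent copy
print(copy.base is X)  # False -> has its own buffer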

answered Sep 22 '22 by user1879926