
How to iterate over two pandas dataframes in chunks

Tags:

python

pandas

For a machine learning task I need to deal with data sets that are too big to fit in memory all at once, so I need to break them down into chunks. Fortunately, pandas.read_csv has a chunksize parameter in which you can specify how many rows to read at a time, and then loop over the data set chunk by chunk with a for loop, which looks like this:

#This example can be found at http://pandas.pydata.org/pandas-docs/dev/io.html

In [120]: reader = pd.read_table('tmp.sv', sep='|', chunksize=4)

In [121]: reader
<pandas.io.parsers.TextFileReader at 0xaa94ad0>

In [122]: for chunk in reader:
   .....:     print(chunk)
   .....: 
   Unnamed: 0         0         1         2         3
0           0  0.469112 -0.282863 -1.509059 -1.135632
1           1  1.212112 -0.173215  0.119209 -1.044236
2           2 -0.861849 -2.104569 -0.494929  1.071804
3           3  0.721555 -0.706771 -1.039575  0.271860
[4 rows x 5 columns]
   Unnamed: 0         0         1         2         3
0           4 -0.424972  0.567020  0.276232 -1.087401
1           5 -0.673690  0.113648 -1.478427  0.524988
2           6  0.404705  0.577046 -1.715002 -1.039268
3           7 -0.370647 -1.157892 -1.344312  0.844885
[4 rows x 5 columns]
   Unnamed: 0         0        1         2         3
0           8  1.075770 -0.10905  1.643563 -1.469388
1           9  0.357021 -0.67460 -1.776904 -0.968914
[2 rows x 5 columns]
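To make the same idea self-contained (the tmp.sv file above is from the pandas docs and isn't available here, so this sketch uses an in-memory CSV instead), each item yielded by the chunked reader is an ordinary DataFrame:

```python
import io
import pandas as pd

# Build a small CSV in memory so the example runs without any files.
csv_data = io.StringIO("a,b\n1,10\n2,20\n3,30\n4,40\n5,50\n")

# Passing chunksize makes read_csv return an iterator of DataFrames
# instead of a single DataFrame.
sizes = []
for chunk in pd.read_csv(csv_data, chunksize=2):
    sizes.append(len(chunk))  # each chunk is a regular DataFrame

print(sizes)  # -> [2, 2, 1]
```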

But I need both the train and test sets inside the for loop so my machine learning algorithm can make predictions on the chunks of data, and I don't know how to do that. I am basically looking for this:

# pseudocode
result = []
train = pd.read_csv('train_set', chunksize=some_number)
test = pd.read_csv('test_set', chunksize=some_number)
for chunk in train and test:  # <- this is the part I don't know how to write
    result.append(do_machine_learning(train, test))
save_result(result)

Update: I tried Andy Hayden's solution, but it gave me a new error when I try to access specific parts of the data:

print("getting train set")
train = pd.read_csv(os.path.join(dir,"Train.csv"),chunksize = 200000)
print("getting test set")
test = pd.read_csv(os.path.join(dir,"Test.csv"),chunksize = 200000)
result = []
for chunk in train:
    print("transforming train,test,labels into numpy arrays")
    labels = np.array(train)[:,3]
    train = np.array(train)[:,2]
    test = np.array(test)[:,2]

    print("getting estimator and predictions")
    result.append(stochastic_gradient(train,test))
    print("got everything")
result = np.array(result)

traceback:

Traceback (most recent call last):
  File "C:\Users\Ano\workspace\final_submission\src\rf.py", line 38, in <module>
    main()
  File "C:\Users\Ano\workspace\final_submission\src\rf.py", line 18, in main
    labels = np.array(train)[:,3]
IndexError: 0-d arrays can only use a single () or a list of newaxes (and a single ...) as an index
Learner asked Nov 02 '22

1 Answer

In a for loop you have access to the variables in the current scope:

In [11]: a = [1, 2, 3]

In [12]: b = 4

In [13]: for L in a:  # no need to "and b"
             print L, b
1 4
2 4
3 4

Be careful, this means assigning in a for loop overwrites variables:

In [14]: for b in a:
             print b
1
2
3

In [15]: b
Out[15]: 3

To iterate through two iterables at the same time use zip:

In [21]: c = [4, 5, 6]

In [22]: zip(a, c)
Out[22]: [(1, 4), (2, 5), (3, 6)]

In Python 2 this produces a list, so it is evaluated in memory (not so in Python 3, where zip is lazy). In Python 2 you can use izip, its iterator equivalent.

In [23]: from itertools import izip  # in python 3, just use zip

In [24]: for La, Lc in izip(a, c):
             print La, Lc
1 4
2 5
3 6
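Applied to the question's two chunked readers, this looks like the sketch below (in Python 3, plain zip already behaves lazily; the in-memory CSVs stand in for the real Train.csv/Test.csv, and the length bookkeeping stands in for the actual do_machine_learning call):

```python
import io
import pandas as pd

# Stand-ins for the real train and test files.
train_csv = io.StringIO("x,y\n1,2\n3,4\n5,6\n7,8\n")
test_csv = io.StringIO("x,y\n9,10\n11,12\n13,14\n15,16\n")

train_reader = pd.read_csv(train_csv, chunksize=2)
test_reader = pd.read_csv(test_csv, chunksize=2)

results = []
for train_chunk, test_chunk in zip(train_reader, test_reader):
    # train_chunk and test_chunk are DataFrames of up to 2 rows each,
    # advanced in lockstep; the real model call would go here, e.g.
    # results.append(do_machine_learning(train_chunk, test_chunk))
    results.append((len(train_chunk), len(test_chunk)))

print(results)  # -> [(2, 2), (2, 2)]
```

Note that zip stops as soon as the shorter reader is exhausted, so if the train and test sets have different numbers of chunks, the trailing chunks of the longer one are silently dropped.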
Andy Hayden answered Nov 08 '22