How to efficiently join/merge/concatenate large data frame in pandas?

Tags: python, pandas

The aim is to create one big data frame on which I can then perform operations such as averaging each row across the columns.

The problem is that as the data frame grows, the time taken by each iteration increases as well, so I cannot finish my computation.

Notes: my df has only two columns: col1 is a string that I only use as the join key (its values are not needed afterwards), and col2 is a float. The number of rows is 3k. Below is an example:

folder_paths    float
folder/Path     1.12630137
folder/Path2    1.067517426
folder/Path3    1.06443264
folder/Path4    1.049119625
folder/Path5    1.039635769
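
For reference, the kind of row-wise average I want to run once everything is in one frame would be something like this (the run_* column names are just placeholders for illustration):

import pandas as pd

# toy version of the final merged frame: the folder paths plus one float column per generated df
merged = pd.DataFrame({
    'folder_paths': ['folder/Path', 'folder/Path2', 'folder/Path3'],
    'run_1': [1.126, 1.068, 1.064],
    'run_2': [1.049, 1.040, 1.055],
})

row_means = merged.set_index('folder_paths').mean(axis=1)  # average each row across the columns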

Question Any ideas on how this code can be made more efficient and where are the bottlenecks? Also, I am unsure if merge is the way to go.

Current ideas One solution that I was considering was to pre-allocate the memory and specify the column types: col1 is a string and col2 is a float.

import pandas as pd

df = pd.DataFrame()  # start from an empty data frame

for i in range(1000):
    if i == 0:  # first iteration: take the generated frame as-is
        df = generate_new_df(arg1, arg2)
    else:       # afterwards: outer-merge the new frame in on col1
        df = pd.merge(df, generate_new_df(arg1, arg2), on='col1', how='outer')
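
Regarding the pre-allocation idea above, a minimal sketch might look like the following. It assumes every generated frame has the same 3k col1 values (in some order) and that the number of frames is known up front; neither is guaranteed here, so treat it only as a sketch:

import numpy as np
import pandas as pd

n_rows, n_frames = 3000, 1000

# pre-allocate one float column per run; col1 (the string key) becomes the index
values = np.empty((n_rows, n_frames), dtype='float64')

for i in range(n_frames):
    new_df = generate_new_df(arg1, arg2)       # the existing generator from above
    if i == 0:
        index = new_df['col1'].to_numpy()      # fix the row order using the first frame
    values[:, i] = new_df.set_index('col1').loc[index, 'col2'].to_numpy()

df = pd.DataFrame(values, index=index, columns=['run_%d' % i for i in range(n_frames)])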

I have also tried pd.concat, but the results are very similar: the time increases after each iteration.

df = pd.concat([df, get_os_is_from_folder(pnlList, sampleSize, randomState)], axis=1)

Results with pd.concat:

run 1   time 0.34s
run 2   time 0.34s
run 3   time 0.32s
run 4   time 0.33s
run 5   time 0.42s
run 6   time 0.41s
run 7   time 0.45s
run 8   time 0.46s
run 9   time 0.54s
run 10  time 0.58s
run 11  time 0.73s
run 12  time 0.72s
run 13  time 0.79s
run 14  time 0.87s
run 15  time 0.95s
run 16  time 1.06s
run 17  time 1.19s
run 18  time 1.24s
run 19  time 1.37s
run 20  time 1.57s
run 21  time 1.68s
run 22  time 1.93s
run 23  time 1.86s
run 24  time 1.96s
run 25  time 2.11s
run 26  time 2.32s
run 27  time 2.42s
run 28  time 2.57s

Building a dfList and calling pd.concat on the whole list yielded similar results. Below are the code and the results.

dfList=[]
for i in range(1000):
    dfList.append(generate_new_df(arg1, arg2))

df = pd.concat(dfList, axis=1)

Results:

run 1 took 0.35 sec.
run 2 took 0.26 sec.
run 3 took 0.3 sec.
run 4 took 0.33 sec.
run 5 took 0.45 sec.
run 6 took 0.49 sec.
run 7 took 0.54 sec.
run 8 took 0.51 sec.
run 9 took 0.51 sec.
run 10 took 1.06 sec.
run 11 took 1.74 sec.
run 12 took 1.47 sec.
run 13 took 1.25 sec.
run 14 took 1.04 sec.
run 15 took 1.26 sec.
run 16 took 1.35 sec.
run 17 took 1.7 sec.
run 18 took 1.73 sec.
run 19 took 6.03 sec.
run 20 took 1.63 sec.
run 21 took 1.93 sec.
run 22 took 1.84 sec.
run 23 took 2.25 sec.
run 24 took 2.65 sec.
run 25 took 6.84 sec.
run 26 took 2.88 sec.
run 27 took 2.58 sec.
run 28 took 2.81 sec.
run 29 took 2.84 sec.
run 30 took 2.99 sec.
run 31 took 3.12 sec.
run 32 took 3.48 sec.
run 33 took 3.35 sec.
run 34 took 3.6 sec.
run 35 took 4.0 sec.
run 36 took 4.41 sec.
run 37 took 4.88 sec.
run 38 took 4.92 sec.
run 39 took 4.78 sec.
run 40 took 5.02 sec.
run 41 took 5.32 sec.
run 42 took 5.31 sec.
run 43 took 5.78 sec.
run 44 took 5.77 sec.
run 45 took 6.15 sec.
run 46 took 6.4 sec.
run 47 took 6.84 sec.
run 48 took 7.08 sec.
run 49 took 7.48 sec.
run 50 took 7.91 sec.
asked Jul 20 '17 by Newskooler


1 Answer

It is still a little unclear exactly what your problem is, but I'm going to assume that the main bottleneck is that you are trying to load lots of dataframes into a list all at once and are running into memory/paging issues. With this in mind, here is an approach which might help, but you will have to test it yourself since I don't have access to your generate_new_df function or your data.

The approach is to use a variation on the merge_with_concat function from this answer: merge your dataframes together in smaller batches first, and then merge the resulting batch frames together in one final step.

For example, if you have 1000 dataframes, you can merge 100 together at a time to give you 10 big dataframes and then merge those final 10 together as a last step. This should ensure that you don't have a list of dataframes that is too big at any one point.

You can use the two functions below (I'm assuming your generate_new_df function takes a file name as one of its arguments) and do something like:

import pandas as pd


def chunk_dfs(file_names, chunk_size):
    """Yield lists of dataframes, chunk_size at a time."""
    dfs = []
    for f in file_names:
        dfs.append(generate_new_df(f))
        if len(dfs) == chunk_size:
            yield dfs
            dfs = []
    if dfs:
        yield dfs  # the last, possibly smaller, chunk


def merge_with_concat(dfs, col):
    """Outer-join dataframes on col by indexing on it and concatenating along the columns."""
    dfs = (df.set_index(col, drop=True) for df in dfs)
    merged = pd.concat(dfs, axis=1, join='outer', copy=False)
    return merged.reset_index(drop=False)

col_name = "name_of_column_to_merge_on"
file_names = ['list/of', 'file/names', ...]
chunk_size = 100

merged = merge_with_concat((merge_with_concat(dfs, col_name) for dfs in chunk_dfs(file_names, chunk_size)), col_name)
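
If you want to sanity-check the pipeline before running it on your real data, a self-contained toy version might look like the following. It reuses chunk_dfs and merge_with_concat from above; the dummy generate_new_df, the 'col1' key and the run_* names are made up purely for illustration:

import numpy as np
import pandas as pd

def generate_new_df(name):
    # dummy stand-in for your real function: a string key column plus one float column
    return pd.DataFrame({'col1': ['folder/Path', 'folder/Path2', 'folder/Path3'],
                         name: np.random.rand(3)})

file_names = ['run_%d' % i for i in range(10)]
chunk_size = 3

merged = merge_with_concat(
    (merge_with_concat(dfs, 'col1') for dfs in chunk_dfs(file_names, chunk_size)),
    'col1')

print(merged.shape)                            # (3, 11): the key column plus one float column per "file"
print(merged.set_index('col1').mean(axis=1))   # the row-wise average from the question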
answered Oct 17 '22 by bunji