 

How do I combine large CSV files in Python?

Tags:

python

pandas

csv

I have 18 CSV files, each approximately 1.6 GB, and each containing approximately 12 million rows. Each file represents one year's worth of data. I need to combine all of these files, extract the data for certain geographies, and then analyse the time series. What is the best way to do this?

I have tried using pd.read_csv, but I hit a memory limit. I have tried including a chunksize argument, but this gives me a TextFileReader object and I don't know how to combine these into a dataframe. I have also tried pd.concat, but this does not work either.

asked Jun 07 '19 by ChrisB


3 Answers

Here is an elegant way of using pandas to combine very large CSV files. The technique is to load a fixed number of rows (defined by CHUNK_SIZE) into memory per iteration until each file has been fully read. Each chunk is then appended to the output file in "append" mode.

import pandas as pd

CHUNK_SIZE = 50000
csv_file_list = ["file1.csv", "file2.csv", "file3.csv"]
output_file = "./result_merge/output.csv"

for csv_file_name in csv_file_list:
    chunk_container = pd.read_csv(csv_file_name, chunksize=CHUNK_SIZE)
    for chunk in chunk_container:
        # note: with to_csv's default header=True, the column names are written
        # before every chunk; the variant below fixes that for files with headers
        chunk.to_csv(output_file, mode="a", index=False)

If your files contain headers, the header row should appear in the combined output only once; repeating it before every chunk (and for every file) is not what you want. pd.read_csv already consumes each input file's own header row, so the remaining task is to make to_csv write the header just once:

import pandas as pd

CHUNK_SIZE = 50000
csv_file_list = ["file1.csv", "file2.csv", "file3.csv"]
output_file = "./result_merge/output.csv"

first_one = True
for csv_file_name in csv_file_list:
    chunk_container = pd.read_csv(csv_file_name, chunksize=CHUNK_SIZE)
    for chunk in chunk_container:
        # write the column names only once, for the very first chunk of the
        # first file; every later chunk is appended without a header row
        chunk.to_csv(output_file, mode="a", index=False, header=first_one)
        first_one = False
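
Since the goal here is also to extract data for certain geographies, you can filter each chunk before appending it, so the combined file only ever contains the rows you need. This is a minimal sketch of that idea, assuming a hypothetical column named "region", hypothetical region values, and the same file list as above:

import pandas as pd

CHUNK_SIZE = 50000
csv_file_list = ["file1.csv", "file2.csv", "file3.csv"]
output_file = "./result_merge/filtered_output.csv"   # hypothetical output path
regions_of_interest = ["London", "Manchester"]       # hypothetical values

first_one = True
for csv_file_name in csv_file_list:
    for chunk in pd.read_csv(csv_file_name, chunksize=CHUNK_SIZE):
        # keep only the rows for the geographies we care about
        filtered = chunk[chunk["region"].isin(regions_of_interest)]
        filtered.to_csv(output_file, mode="a", index=False, header=first_one)
        first_one = False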
answered Sep 20 '22 by Nguyen Van Duc


The memory limit is hit because you are trying to load the whole CSV into memory. An easy solution is to read the files line by line (assuming they all have the same structure), check each line, and write it to the target file:

filenames = ["file1.csv", "file2.csv", "file3.csv"]
sep = ";"

def check_data(data):
    # ... your tests
    return True # << True if data should be written into target file, else False

with open("/path/to/dir/result.csv", "a+") as targetfile:
    for filename in filenames:
        with open("/path/to/dir/"+filename, "r") as f:
            next(f) # << only if the first line contains headers
            for line in f:
                data = line.split(sep)
                if check_data(data):
                    targetfile.write(line)

Update: An example of the check_data method, following your comments:

def check_data(data):
    return data[n] == 'USA' # < where n is the column holding the country
answered Sep 19 '22 by olinox14


The TextFileReader returned by pd.read_csv(..., chunksize=...) is an iterator: looping over it yields an ordinary DataFrame for each chunk. You can collect those chunks and combine them with pd.concat.
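
A minimal sketch of that idiom, using a hypothetical chunk size (note that concatenating every chunk unfiltered still requires the full data to fit in memory, so in practice you would filter each chunk first):

import pandas as pd

reader = pd.read_csv("file1.csv", chunksize=50000)  # TextFileReader
chunks = [chunk for chunk in reader]                # each chunk is a DataFrame
df = pd.concat(chunks, ignore_index=True)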

answered Sep 22 '22 by Vishnu Dasu