What is the best way to loop through and process a large (10GB+) text file?

I need to loop through a very large text file, several gigabytes in size (a zone file to be exact). I need to run a few queries for each entry in the zone file, and then store the results in a searchable database.

My weapons of choice at the moment, mainly because I know them, are Python and MySQL. I'm not sure how well either will deal with files of this size, however.

Does anyone with experience in this area have any suggestions on the best way to open and loop through the file without overloading my system? How about the most efficient way to process the file once I can open it (threading?) and store the processed data?

asked Nov 02 '11 by MarathonStudios

1 Answer

You shouldn't have any real trouble storing that amount of data in MySQL, although you will probably not be able to store the entire database in memory, so expect some IO performance issues. As always, make sure you have the appropriate indices before running your queries.
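For example, a minimal schema sketch, assuming a mysql-connector-python connection and an index on the domain name so the later lookups don't turn into full table scans; the table and column names (zone_records, domain, record) are illustrative, not taken from the question:

import mysql.connector  # assumed driver; MySQLdb would look much the same

conn = mysql.connector.connect(user='user', password='secret', database='zones')
cur = conn.cursor()

# Hypothetical table for parsed zone entries, indexed on the domain up front.
cur.execute("""
    CREATE TABLE IF NOT EXISTS zone_records (
        id     BIGINT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
        domain VARCHAR(255)    NOT NULL,
        record TEXT            NOT NULL,
        KEY idx_domain (domain)
    )
""")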

The most important thing is not to try to load the entire file into memory. Iterate over the file line by line; don't use a method like readlines(), which loads the whole file at once.

Make sure to batch the requests. Load up a few thousand lines at a time and send them all in one big SQL request.

This approach should work:

def push_batch(batch):
    # Send a big INSERT request to MySQL
    # (one possible body is sketched below).
    pass

current_batch = []
with open('filename') as f:
    for line in f:
        current_batch.append(line)

        if len(current_batch) >= 1000:
            push_batch(current_batch)
            current_batch = []

    # Push whatever is left over after the last full batch.
    if current_batch:
        push_batch(current_batch)
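One possible body for push_batch, as a sketch only: it reuses the conn and cur objects and the assumed zone_records table from the schema sketch above, treats the first whitespace-separated field of each line as the domain, and leans on executemany(), which Connector/Python folds into a single multi-row INSERT for simple statements like this one.

def push_batch(batch):
    # Sketch only: assumes the zone_records table and conn/cur from above.
    # How you split each zone-file line is up to you; here the raw line is
    # stored alongside its first field.
    rows = [(line.split(None, 1)[0], line.rstrip('\n'))
            for line in batch if line.strip()]
    if not rows:
        return
    # executemany() batches this into one multi-row INSERT statement.
    cur.executemany(
        "INSERT INTO zone_records (domain, record) VALUES (%s, %s)",
        rows)
    conn.commit()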

Zone files are pretty regularly formatted, so consider whether you can get away with just using LOAD DATA INFILE. You might also consider creating a named pipe, pushing partially formatted data into it from Python, and using LOAD DATA INFILE to have MySQL read it in directly.
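A rough sketch of the plain LOAD DATA INFILE route, assuming the zone file has already been preprocessed into tab-separated domain/record lines at a path the MySQL server is allowed to read, and that your account has the FILE privilege; the table is the assumed zone_records from above:

# Assumes /tmp/zone.tsv holds one "domain<TAB>record" pair per line and
# lives somewhere the server may read from (check secure_file_priv).
cur.execute(r"""
    LOAD DATA INFILE '/tmp/zone.tsv'
    INTO TABLE zone_records
    FIELDS TERMINATED BY '\t'
    LINES TERMINATED BY '\n'
    (domain, record)
""")
conn.commit()

The named-pipe variant is the same statement pointed at a FIFO created with os.mkfifo(), with Python writing the reformatted lines into the pipe from another process or thread while the server reads from it.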

The MySQL documentation has some great tips on optimizing inserts; some highlights (roughly illustrated in the sketch after this list):

  • Use multiple value lists in each insert statement.
  • Use INSERT DELAYED, particularly if you are pushing from multiple clients at once (e.g. using threading).
  • Lock your tables before inserting.
  • Tweak the key_buffer_size and bulk_insert_buffer_size.
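As a rough illustration of those tips: the 64 MB figure, the literal rows, and the reuse of the cursor from the sketches above are all assumptions, and key_buffer_size is a server-wide setting you would normally change in the server configuration rather than per session.

# Rough illustration only; table name, buffer size, and rows are assumptions.
cur.execute("SET SESSION bulk_insert_buffer_size = 64 * 1024 * 1024")

# Locking the table lets the storage engine (MyISAM in particular) flush the
# index buffer once at UNLOCK instead of after every INSERT.
cur.execute("LOCK TABLES zone_records WRITE")

# One INSERT statement, many value lists.
cur.execute(
    "INSERT INTO zone_records (domain, record) VALUES "
    "('example.com.', 'example.com. 86400 IN NS ns1.example.com.'), "
    "('example.net.', 'example.net. 86400 IN NS ns1.example.net.')")

cur.execute("UNLOCK TABLES")
conn.commit()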

The fastest processing will be done in MySQL, so consider whether you can get away with doing the queries you need after the data is in the database, not before. If you do need to do operations in Python, threading is not going to help you. Only one thread of Python code can execute at a time (the GIL), so unless you're doing something that spends a considerable amount of time in C or interfaces with external resources, you're only ever going to be running in one thread anyway.

The most important optimization question is what is bounding the speed: there's no point spinning up a bunch of threads to read the file if the database is the bounding factor. The only way to really know is to try it and make tweaks until it is fast enough for your purpose.

answered Sep 26 '22 by Zack Bloom