
Most efficient way in Python to iterate over a large file (10GB+)

I'm working on a Python script that goes through two files: one containing a list of UUIDs, the other containing a large number of log entries, each line of which contains one of the UUIDs from the first file. The purpose of the program is to build a list of the UUIDs from file1 and then, for each occurrence of a UUID in the log file, increment the associated value.

So long story short: count how many times each UUID appears in the log file. At the moment, I have a list populated with UUIDs as keys and 'hits' as values, and another loop that iterates over each line of the log file, checking whether the UUID in the log line matches a UUID in the UUID list. If it matches, it increments the value.

    for i, logLine in enumerate(logHandle):         #start matching UUID entries in log file to UUID from rulebase
        if logFunc.progress(lineCount, logSize):    #check progress
            print logFunc.progress(lineCount, logSize)  #print progress in 10% intervals
        for uid in uidHits:
            if logLine.count(uid) == 1:             #for each UUID, check the current line of the log for a match in the UUID list
                uidHits[uid] += 1                   #if matched, increment the relevant value in the uidHits list
                break                                #as we've already found the match, don't process the rest
        lineCount += 1               

It works as it should, but I'm sure there is a more efficient way of processing the file. I've been through a few guides and found that using 'count' is faster than using a compiled regex. I thought reading the file in chunks rather than line by line would improve performance by reducing disk I/O time, but the difference on a ~200MB test file was negligible. If anyone has any other methods I would be very grateful :)

asked Jun 02 '11 by SG84


2 Answers

Think functionally!

  1. Write a function which will take a line of the log file and return the uuid. Call it uuid, say.

  2. Apply this function to every line of the log file. If you are using Python 3 you can use the built-in function map; otherwise, use itertools.imap, which is lazy (Python 2's built-in map would build the entire list of results in memory).

  3. Pass this iterator to a collections.Counter.

    collections.Counter(map(uuid, open("log.txt")))
    

This will be pretty much optimally efficient.
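
For concreteness, here is a minimal, hedged sketch of the whole recipe. It assumes the UUID is the first whitespace-separated field of each log line, which is only a placeholder; adapt uuid() to your actual log format.

    import collections
    try:
        from itertools import imap as lazy_map   # Python 2: imap is lazy
    except ImportError:
        lazy_map = map                            # Python 3: map is already lazy

    def uuid(line):
        # Placeholder extraction: assumes the UUID is the first
        # whitespace-separated field of the log line.
        return line.split(None, 1)[0]

    with open("log.txt") as log:
        counts = collections.Counter(lazy_map(uuid, log))

    print(counts.most_common(10))   # ten most frequent UUIDs

The per-line Python work here is just one split call and a hash update inside Counter, which is about as little as line-by-line processing can do.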

A couple comments:

  • This completely ignores the list of UUIDs and just counts every UUID that appears in the log file. You will need to modify the program somewhat if you don't want this (one way to do so is sketched after this list).

  • Your code is slow because you are using the wrong data structures. A dict is what you want here.
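
If you do want to count only the UUIDs from file1, one hedged way is the following, reusing the uuid() helper from the sketch above; uuids.txt is a placeholder name for the UUID list file:

    with open("uuids.txt") as f:                  # placeholder name for file1
        wanted = set(line.strip() for line in f)

    counts = dict.fromkeys(wanted, 0)             # every known UUID starts at 0 hits

    with open("log.txt") as log:
        for line in log:
            uid = uuid(line)                      # uuid() from the sketch above
            if uid in wanted:                     # O(1) set membership test
                counts[uid] += 1

The set (or dict) membership test is a constant-time hash lookup, which is what makes this fast compared with scanning a list of UUIDs for every log line.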
answered by Katriel


Like folks above have said, with a 10GB file you'll probably hit the limits of your disk pretty quickly. For code-only improvements, the generator advice is great. In Python 2.x it'll look something like:

    uuid_generator = (line.split(SPLIT_CHAR)[UUID_FIELD] for line in file)
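
As a hedged illustration of how that generator might be consumed, SPLIT_CHAR, UUID_FIELD, and log_file.txt being placeholders that depend on your log format:

    import collections

    SPLIT_CHAR = " "    # placeholder: whatever separates fields in your log lines
    UUID_FIELD = 0      # placeholder: index of the field holding the UUID

    with open("log_file.txt") as f:
        uuid_generator = (line.split(SPLIT_CHAR)[UUID_FIELD] for line in f)
        counts = collections.Counter(uuid_generator)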

It sounds like this doesn't actually have to be a Python problem. If you're not doing anything more complex than counting UUIDs, Unix might be able to solve your problem faster than Python can.

    cut -d${SPLIT_CHAR} -f${UUID_FIELD} log_file.txt | sort | uniq -c
answered by blinsay