
Optimizing find and replace over large files in Python

I am a complete beginner to Python, or any serious programming language for that matter. I finally got a prototype to work, but I think it will be too slow.

My goal is to find and replace some Chinese characters with integers across all the files (they are csv) in a directory, according to a csv file I have. The files are nicely numbered by year-month, for example 2000-01.csv, and they will be the only files in that directory.

I will be looping across about 25 files that are in the neighborhood of 500 MB each (and about a million lines). The dictionary I will be using will have about 300 elements, and I will be changing Unicode (Chinese characters) to integers. I tried a test run and, assuming everything scales up linearly (?), it looks like it would take about a week for this to run.

Thanks in advance. Here is my code (don't laugh!):

# -*- coding: utf-8 -*-

import os, codecs

dir = "C:/Users/Roy/Desktop/test/"

# mapping of strings to replace -> replacements (the real one has ~300 Chinese characters)
Dict = {'hello' : 'good', 'world' : 'bad'}

for dirs, subdirs, files in os.walk(dir):
    for file in files:
        # read the whole file into memory
        inFile = codecs.open(dir + file, "r", "utf-8")
        inFileStr = inFile.read()
        inFile.close()
        # reopen the same file for writing and apply each replacement in turn
        inFile = codecs.open(dir + file, "w", "utf-8")
        for key in Dict:
            inFileStr = inFileStr.replace(key, Dict[key])
        inFile.write(inFileStr)
        inFile.close()
asked Sep 26 '10 by rallen


3 Answers

In your current code, you're reading the whole file into memory at once. Since they're 500 MB files, that means 500 MB strings. And then you do repeated replacements on them, which means Python has to create a new 500 MB string for the first replacement, destroy the first string, create a second 500 MB string for the second replacement, destroy the second string, and so on for each replacement. That turns out to be quite a lot of data being copied back and forth, not to mention a lot of memory being used.

If you know each replacement will always be contained within a single line, you can read the file line by line by iterating over it; Python buffers the reads, so this is reasonably efficient. Open a new file, under a new name, for writing at the same time, perform the replacements on each line in turn, and write it out immediately. Doing this greatly reduces the amount of memory used and the amount of data copied back and forth as you do the replacements:

for file in files:
    fname = os.path.join(dir, file)
    inFile = codecs.open(fname, "r", "utf-8")
    outFile = codecs.open(fname + ".new", "w", "utf-8")
    for line in inFile:
        # only one line is held in memory at a time
        newline = do_replacements_on(line)
        outFile.write(newline)
    inFile.close()
    outFile.close()
    # swap the rewritten file over the original
    os.rename(fname + ".new", fname)

If you can't be certain that they'll always be on one line, things get a little harder; you'd have to read in blocks manually, using inFile.read(blocksize), and keep careful track of whether there might be a partial match at the end of the block. Not as easy to do, but usually still worth it to avoid the 500 MB strings.
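
For illustration only (this is not from the original answer, and the names replace_blockwise, pattern, repl and maxlen are placeholders), a rough sketch of that block-wise approach might look like the following, using a compiled pattern and replacement callable like the ones built in the re example further down. The idea is to hold back the last maxlen - 1 characters of each buffer, since a key could straddle a block boundary:

def replace_blockwise(inFile, outFile, pattern, repl, maxlen, blocksize=1024 * 1024):
    # `pattern` and `repl` are built as in the re example below; `maxlen` is the
    # length of the longest key.  A match starting in the last (maxlen - 1)
    # characters of the buffer might continue into the next block, so that part
    # is held back and prepended to the next read instead of being written out.
    tail = u""
    while True:
        block = inFile.read(blocksize)
        if not block:
            break
        buf = tail + block
        cut = len(buf) - (maxlen - 1)
        pieces, pos = [], 0
        for match in pattern.finditer(buf):
            if match.start() >= cut:
                break          # might be incomplete; handle it with the next block
            pieces.append(buf[pos:match.start()])
            pieces.append(repl(match))
            pos = match.end()
        safe_end = max(pos, cut)
        pieces.append(buf[pos:safe_end])
        outFile.write(u"".join(pieces))
        tail = buf[safe_end:]
    # end of input: whatever is left over can be replaced normally
    outFile.write(pattern.sub(repl, tail))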

Another big improvement would be if you could do the replacements in one go, rather than trying a whole bunch of replacements in order. There are several ways of doing that, but which fits best depends entirely on what you're replacing and with what. For translating single characters into something else, the translate method of unicode objects may be convenient. You pass it a dict mapping unicode codepoints (as integers) to unicode strings:

>>> u"\xff and \ubd23".translate({0xff: u"255", 0xbd23: u"something else"})
u'255 and something else'
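
For the question's actual case, where single Chinese characters map to integers, such a table can be built from an ordinary dict by converting each key with ord() and each value to a unicode string (the characters and numbers below are placeholders, not taken from the question):

>>> mapping = {u"\u4e00": 1, u"\u4e8c": 2}
>>> table = dict((ord(k), unicode(v)) for k, v in mapping.items())
>>> u"\u4e00 and \u4e8c".translate(table)
u'1 and 2'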

For replacing substrings (and not just single characters), you could use the re module. The re.sub function (and the sub method of compiled regexps) can take a callable (a function) as the replacement argument, which will then be called for each match:

>>> import re
>>> d = {u'spam': u'spam, ham, spam and eggs', u'eggs': u'sausages'}
>>> p = re.compile("|".join(re.escape(k) for k in d))
>>> def repl(m):
...     return d[m.group(0)]
...
>>> p.sub(repl, u"spam, vikings, eggs and vikings")
u'spam, ham, spam and eggs, vikings, sausages and vikings'
answered by Thomas Wouters


I think you can lower memory use greatly (and thus limit swap use and make things faster) by reading a line at a time and writing it (after the regexp replacements already suggested) to a temporary file, then moving that file over the original.
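
For what it's worth, a minimal sketch of that pattern (using the do_replacements_on placeholder from the answer above; the rewrite_file name and the .tmp suffix are made up for the example) might look like this. Note that on Windows os.rename won't overwrite an existing file, so the original is removed first:

import codecs, os

def rewrite_file(path, do_replacements_on):
    tmppath = path + ".tmp"
    inFile = codecs.open(path, "r", "utf-8")
    outFile = codecs.open(tmppath, "w", "utf-8")
    try:
        for line in inFile:
            outFile.write(do_replacements_on(line))
    finally:
        inFile.close()
        outFile.close()
    os.remove(path)            # needed on Windows before the rename
    os.rename(tmppath, path)   # move the rewritten file over the original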

answered by Radomir Dopieralski


A few things (unrelated to the optimization problem):

dir + file should be os.path.join(dir, file)

You might want to avoid reusing inFile and instead open (and write to) a separate outFile. This won't improve performance either, but it is good practice.

I don't know whether you're I/O bound or CPU bound, but if your CPU utilization is very high, you may want to use threading, with each thread operating on a different file (so with a quad-core processor, you'd be reading/writing four different files simultaneously).
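
As a purely illustrative sketch (not part of this answer): in CPython, threads won't speed up CPU-bound string work because of the GIL, so a process pool with one worker per file may get closer to that goal. SRC_DIR, REPLACEMENTS and process_one_file are invented names, and the mapping values are placeholders:

import codecs, multiprocessing, os

SRC_DIR = "C:/Users/Roy/Desktop/test/"
REPLACEMENTS = {u"\u4e00": u"1", u"\u4e8c": u"2"}   # stand-in mapping

def process_one_file(fname):
    # rewrite a single file line by line, then swap it over the original
    path = os.path.join(SRC_DIR, fname)
    inFile = codecs.open(path, "r", "utf-8")
    outFile = codecs.open(path + ".new", "w", "utf-8")
    for line in inFile:
        for key, value in REPLACEMENTS.items():
            line = line.replace(key, value)
        outFile.write(line)
    inFile.close()
    outFile.close()
    os.rename(path + ".new", path)
    return fname

if __name__ == "__main__":
    pool = multiprocessing.Pool()   # defaults to one worker per CPU core
    pool.map(process_one_file, os.listdir(SRC_DIR))
    pool.close()
    pool.join()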

answered by babbitt