Python: regex match across file chunk boundaries

Huge plain-text data file

I read a huge file in chunks using Python and apply a regex to each chunk. Based on an identifier tag, I want to extract the corresponding value. Because the file is cut into fixed-size chunks, values that straddle a chunk boundary are lost.

Requirements:

  • The file must be read in chunks.
  • The chunk sizes must be smaller than or equal to 1 GiB.


Python code example

import re

identifier_pattern = re.compile(r'Identifier: (.*?)\n')
with open('huge_file', 'r') as f:
    data_chunk = f.read(1024 * 1024 * 1024)  # one 1 GiB chunk
    m = re.findall(identifier_pattern, data_chunk)


Chunk data examples

Good: number of tags equivalent to number of values

Identifier: value
Identifier: value
Identifier: value
Identifier: value


Depending on where a chunk ends, you get boundary issues like the one below: the third identifier returns an incomplete value, "v" instead of "value", and the next chunk starts with "alue". This leads to missing data after parsing.

Bad: identifier value incomplete

Identifier: value
Identifier: value
Identifier: v


How do you solve chunk boundary issues like this?

asked May 27 '17 by JodyK

2 Answers

Assuming this is your exact problem, you can probably just adapt your regex and read the file line by line (which won't load the full file into memory):

import re

matches = []
identifier_pattern = re.compile(r'Identifier: (.*?)$')
with open('huge_file') as f:
    for line in f:  # iterating the file object streams it one line at a time
        matches += identifier_pattern.findall(line)

print("matches", matches)
answered Nov 25 '22 by Jack

You can control how the chunks are formed and keep each one close to 1024 * 1024 * 1024 characters; by building chunks from whole lines you avoid cutting values at the boundary:

import re

identifier_pattern = re.compile(r'Identifier: (.*?)\n')
counter = 1024 * 1024 * 1024
data_chunk = ''
with open('huge_file', 'r') as f:
    for line in f:
        data_chunk += line  # grow the chunk one whole line at a time
        if len(data_chunk) > counter:
            m = identifier_pattern.findall(data_chunk)
            print(m)
            data_chunk = ''
    # Analyse the last chunk of data
    m = identifier_pattern.findall(data_chunk)
    print(m)

Alternatively, you can go over the same file twice with different read starting points (first time from offset 0, second time offset by the maximum length of a matched string collected during the first pass) and store the results as dictionaries keyed by the start position of the matched string in the file. That position is the same in both passes, so merging the results is straightforward; to be safer, you can merge by both the start position and the length of the matched string.
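A minimal sketch of that two-pass idea (the scan_from() helper name, the 1 MiB chunk size and the binary-mode reads are assumptions for illustration, not something the answer prescribes):

import re

# Bytes pattern so byte offsets from seek() line up with match positions.
identifier_pattern = re.compile(rb'Identifier: (.*?)\n')
CHUNK_SIZE = 1024 * 1024  # illustrative; the question uses 1 GiB

def scan_from(path, offset):
    """Scan the file in chunks starting at byte `offset` and return a dict
    mapping the absolute start position of each match to the matched value."""
    results = {}
    with open(path, 'rb') as f:
        f.seek(offset)
        pos = offset
        while True:
            chunk = f.read(CHUNK_SIZE)
            if not chunk:
                break
            for m in identifier_pattern.finditer(chunk):
                results[pos + m.start()] = m.group(1)
            pos += len(chunk)
    return results

first = scan_from('huge_file', 0)                         # pass 1: from offset 0
shift = max((len(v) for v in first.values()), default=1)  # longest value seen so far
second = scan_from('huge_file', shift)                    # pass 2: shifted start

# Positions are absolute file offsets, identical in both passes, so merging by
# key collapses duplicates and fills in matches that were split in pass 1.
merged = {**first, **second}

Each pass still misses matches that straddle one of its own chunk boundaries, but because the boundaries of the two passes fall in different places, the merge recovers them.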

Good luck!

answered Nov 25 '22 by Andriy Ivaneyko