I read a huge file in chunks using Python, then apply a regex to each chunk to extract the value that follows an identifier tag. Because of the chunk size, data is missing at the chunk boundaries.
Requirements:
Python code example
import re

identifier_pattern = re.compile(r'Identifier: (.*?)\n')
with open('huge_file', 'r') as f:
    data_chunk = f.read(1024*1024*1024)
    m = identifier_pattern.findall(data_chunk)
Chunk data examples
Good: the number of tags matches the number of values
Identifier: value
Identifier: value
Identifier: value
Identifier: value
Due to the chunk size, you get varying boundary issues as listed below. The third identifier returns an incomplete value, "v" instead of "value". The next chunk contains "alue". This causes missing data after parsing.
Bad: identifier value incomplete
Identifier: value
Identifier: value
Identifier: v
How do you solve chunk boundary issues like this?
Assuming this is your exact problem, you could simply adapt your regex and read the file line by line (which won't load the full file into memory):
import re

matches = []
identifier_pattern = re.compile(r'Identifier: (.*?)$')
with open('huge_file') as f:
    for line in f:
        matches += identifier_pattern.findall(line)

print("matches", matches)
You can also control the chunk forming yourself, keeping each chunk close to 1024 * 1024 * 1024 characters while only splitting on line boundaries; that way you avoid losing partial values:
import re

identifier_pattern = re.compile(r'Identifier: (.*?)\n')
counter = 1024 * 1024 * 1024
data_chunk = ''
with open('huge_file', 'r') as f:
    for line in f:
        data_chunk += line
        if len(data_chunk) > counter:
            m = identifier_pattern.findall(data_chunk)
            print(m)
            data_chunk = ''

# Analyse the last chunk of data
m = identifier_pattern.findall(data_chunk)
print(m)
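If you need fixed-size reads rather than line iteration, a common pattern is to carry the unmatched tail of each chunk over into the next read. This is a sketch, not part of the answer above; the function name, file path, and chunk size are assumptions:

```python
import re

identifier_pattern = re.compile(r'Identifier: (.*?)\n')

def find_identifiers(path, chunk_size=64 * 1024):
    """Yield identifier values from path, carrying partial lines across chunks."""
    tail = ''  # unprocessed remainder from the previous chunk
    with open(path, 'r') as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                break
            data = tail + chunk
            # Everything after the last newline may be an incomplete
            # 'Identifier: ...' line, so keep it as the carry-over tail.
            cut = data.rfind('\n') + 1
            yield from identifier_pattern.findall(data[:cut])
            tail = data[cut:]
    # The file may not end with a newline; match the tail against $ instead of \n.
    yield from re.findall(r'Identifier: (.*?)$', tail)
```

With this approach the "Identifier: v" / "alue" split from the question cannot occur, because a line is only matched once it is complete.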
Alternatively, you can go over the same file twice with different read start offsets (the first pass from offset 0, the second pass from an offset equal to the maximum length of a matched string collected during the first pass). Store the results as dictionaries keyed by the start position of the matched string in the file; that position is the same in both passes, so merging the results is straightforward. It would be even more robust to merge by both the start position and the length of the matched string.
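The two-pass idea can be sketched roughly like this (the helper names, the binary read, and the chunk size are my assumptions, not part of the original answer):

```python
import re

identifier_pattern = re.compile(rb'Identifier: (.*?)\n')

def scan_pass(path, start_offset, chunk_size=1024 * 1024):
    """One pass over the file: map absolute byte offset -> matched value."""
    results = {}
    with open(path, 'rb') as f:
        f.seek(start_offset)
        pos = start_offset
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                break
            for m in identifier_pattern.finditer(chunk):
                # Key by absolute file position so both passes agree on a match.
                results[pos + m.start()] = m.group(1).decode()
            pos += len(chunk)
    return results

def two_pass_scan(path, offset, chunk_size=1024 * 1024):
    # A match truncated at a chunk boundary in the first pass is whole in the
    # second pass, because the boundaries are shifted by `offset` (which should
    # be at least the maximum match length). Merging by start position
    # deduplicates the matches that both passes found.
    merged = scan_pass(path, 0, chunk_size)
    merged.update(scan_pass(path, offset, chunk_size))
    return merged
```

Note that this reads the file twice, so the line-by-line or tail-carrying approaches are usually preferable for very large files.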
Good luck!