For an upcoming programming competition I solved a few tasks from former competitions. Each task looks like this: we get a bunch of input files (each containing one line of numbers and strings, e.g. "2 15 test 23 ..."), and we have to write a program that reads them and returns some computed values.
These input files can be quite large: for instance 10 MB. My code is the following:
with open(filename) as f:
    input_data = f.read().split()
This is quite slow, I guess mostly because of the split method. Is there a faster way?
What you have already looks like the best way to do plain-text I/O on a one-line file.
10 MB of plain text is fairly large. If you need more of a speedup, consider pickling the data in a binary format instead of a plain text format, or, if the data is very repetitive, storing it compressed.
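For example, a minimal sketch of that conversion (the file names, and the idea of a separate one-time conversion pass, are my assumptions, not part of the original setup):

import gzip
import pickle

# One-time conversion pass: parse the text once, store the token list in
# pickle's binary format so later runs skip the text parsing entirely.
with open("input.txt") as f:          # hypothetical input file name
    tokens = f.read().split()
with open("tokens.pkl", "wb") as f:   # hypothetical output file name
    pickle.dump(tokens, f, protocol=pickle.HIGHEST_PROTOCOL)

# Later runs just load the already-parsed tokens:
with open("tokens.pkl", "rb") as f:
    tokens = pickle.load(f)

# For very repetitive data, the same pickle can be stored compressed:
with gzip.open("tokens.pkl.gz", "wb") as f:
    pickle.dump(tokens, f, protocol=pickle.HIGHEST_PROTOCOL)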
If one of your input files contains independent tasks (that is, if you can work on a few tokens of the line at a time, without knowing the tokens further ahead), you can do the reading and the processing in lockstep, by simply not reading the whole file at once:
def read_groups(f):
    chunksize = 4096  # how many bytes to read from the file at once
    buf = f.read(chunksize)
    while buf:
        if entire_group_inside(buf):  # checks whether buf holds a complete group
            i = next_group_index(buf)  # index where the next group starts
            group, buf = buf[:i], buf[i:]
            yield group
        else:
            more = f.read(chunksize)
            if not more:
                # end of file: whatever is left in buf is the last group
                yield buf
                return
            buf += more
with open(filename) as f:
    for data in read_groups(f):
        ...  # do something with each group
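To make that skeleton concrete, here is one possible way to fill in the two helper hooks (my own sketch, not part of the original answer), treating every whitespace-separated token as its own group:

import re

def entire_group_inside(buf):
    # A token is complete once whitespace follows a non-space character,
    # i.e. the buffer cannot be cutting the token in half at its end.
    return re.search(r"\S\s", buf) is not None

def next_group_index(buf):
    # Index just past the first complete token and the whitespace after it,
    # so buf[:i] contains one whole token (plus surrounding whitespace).
    return re.search(r"\S\s", buf).end()

with open(filename) as f:
    for raw in read_groups(f):
        token = raw.strip()
        if token:  # the final leftover chunk may be pure whitespace
            print(token)  # stand-in for the real per-token processing

What counts as a "group" is up to the task; the point is that only roughly chunksize bytes sit in memory at any time.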
This has some advantages: memory use stays bounded by roughly the chunk size rather than the file size, and you can start processing before the whole file has been read.
If you don't have much processing to do, though, your task is fundamentally I/O-bound, and, as wim said, there isn't much you can do to speed it up as it stands, other than rethinking your input data format.