 

Improve speed of reading and converting from binary file?

I know there have been some questions regarding file reading, binary data handling and integer conversion using struct before, so I come here to ask about a piece of code I have that I think is taking too much time to run. The file being read is a multichannel data-sample recording (short integers), with interleaved intervals of data (hence the nested for statements). The code is as follows:

# channel_content is a dictionary, channel_content[channel]['nsamples'] is a string
for rec in xrange(number_of_intervals):
    for channel in channel_names:
        channel_content[channel]['recording'].extend(
            [struct.unpack( "h", f.read(2))[0]
            for iteration in xrange(int(channel_content[channel]['nsamples']))])

With this code, I get 2.2 seconds per megabyte read with a dual-core with 2Mb RAM, and my files typically have 20+ Mb, which gives some very annoying delay (especially considering that another benchmark shareware program I am trying to mirror loads the file WAY faster).

What I would like to know:

  1. If there is some violation of "good practice": badly arranged loops, repetitive operations that take longer than necessary, use of inefficient container types (dictionaries?), etc.
  2. If this reading speed is normal, or just normal for Python, and whether it can be substantially improved.
  3. If creating a C++ compiled extension would be likely to improve performance, and if it would be a recommended approach.
  4. (of course) If anyone can suggest some modification to this code, preferably based on previous experience with similar operations.

Thanks for reading

(I have already posted a few questions about this job of mine; I hope they are all conceptually unrelated, and I also hope I am not being too repetitive.)

Edit: channel_names is a list, so I made the correction suggested by @eumiro (removed the typoed brackets)

Edit: I am currently going with Sebastian's suggestion of using array with the fromfile() method, and will soon put the final code here. Besides, every contribution has been very useful to me, and I very gladly thank everyone who kindly answered.

Final form, after reading the whole file with array.fromfile() once and then extending one array per channel by slicing the big array:

from array import array
import os

# read every remaining sample in one call, then slice it up per channel
fullsamples = array('h')
remaining = os.path.getsize(f.name) - f.tell()  # bytes left in the file
fullsamples.fromfile(f, remaining // fullsamples.itemsize)
position = 0
for rec in xrange(int(self.header['nrecs'])):
    for channel in self.channel_labels:
        samples = int(self.channel_content[channel]['nsamples'])
        self.channel_content[channel]['recording'].extend(
            fullsamples[position:position + samples])
        position += samples

The speed improvement was very impressive compared with reading the file a little at a time, or with using struct in any form.
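For reference, here is a minimal timing sketch of the difference (the file name, and the assumption that the file contains nothing but raw shorts, are mine for illustration):

import os
import struct
import timeit
from array import array

FILENAME = "samples.bin"  # hypothetical test file of raw shorts

def read_with_struct():
    # one unpack call per sample, as in the original loop
    with open(FILENAME, "rb") as f:
        return [struct.unpack("h", f.read(2))[0]
                for _ in range(os.path.getsize(FILENAME) // 2)]

def read_with_array():
    # one fromfile call for the whole file
    with open(FILENAME, "rb") as f:
        a = array("h")
        a.fromfile(f, os.path.getsize(FILENAME) // a.itemsize)
        return a

print("struct: %.2f s" % timeit.timeit(read_with_struct, number=3))
print("array:  %.2f s" % timeit.timeit(read_with_array, number=3))

Both functions read the same data; only the number of Python-level calls differs, which is where the time goes.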

asked Apr 27 '11 by heltonbiker

1 Answer

A single array.fromfile() call is definitely the fastest, but it won't work if the data series is interleaved with other value types.

In such cases, another big speed increase that can be combined with the previous struct answers is to precompile a struct.Struct object with the format for each chunk, instead of calling the unpack function multiple times. From the docs:

Creating a Struct object once and calling its methods is more efficient than calling the struct functions with the same format since the format string only needs to be compiled once.

So for instance, if you wanted to unpack 1000 interleaved shorts and floats at a time, you could write:

chunksize = 1000
structobj = struct.Struct("hf" * chunksize)
while True:
    data = fileobj.read(structobj.size)
    if len(data) < structobj.size:
        break  # end of file: fewer than chunksize pairs remain
    chunkdata = structobj.unpack(data)

(Note that the example is still partial: the final chunk, which is smaller than chunksize, has to be unpacked with a correspondingly smaller format; see the sketch below.)
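For completeness, here is one way the end of the file could be handled. This is only a sketch, assuming the file holds nothing but interleaved short/float pairs (the file name is hypothetical):

import struct

CHUNKSIZE = 1000
record = struct.Struct("hf")               # one short/float pair
chunk = struct.Struct("hf" * CHUNKSIZE)    # precompiled block of 1000 pairs

values = []
with open("interleaved.bin", "rb") as fileobj:
    while True:
        data = fileobj.read(chunk.size)
        if len(data) == chunk.size:
            values.extend(chunk.unpack(data))  # fast path: full chunk
        else:
            # tail: unpack the remaining whole records one at a time
            usable = len(data) - len(data) % record.size
            for offset in range(0, usable, record.size):
                values.extend(record.unpack_from(data, offset))
            break

The precompiled chunk Struct does the bulk of the work; the small record Struct is only used for the last, partial read.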

answered Oct 06 '22 by Karim Bahgat