I have very large datasets that are stored in binary files on the hard disk. Here is an example of the file structure:
File Header
    149 Byte ASCII Header
Record Start
    4 Byte Int - Record Timestamp
    Sample Start
        2 Byte Int - Data Stream 1 Sample
        2 Byte Int - Data Stream 2 Sample
        2 Byte Int - Data Stream 3 Sample
        2 Byte Int - Data Stream 4 Sample
    Sample End
There are 122,880 Samples per Record and 713 Records per File, which yields a total size of 700,910,521 Bytes. The sample rate and number of records vary from file to file, so I have to detect both per file.
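For reference, the byte math for the example file works out like this (a quick sanity check using the numbers above):

header_size = 149                                 # ASCII header
samples_per_record = 122880
record_size = 4 + samples_per_record * 4 * 2      # timestamp + 4 streams of 2-byte ints
number_of_records = 713

print header_size + number_of_records * record_size   # 700910521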
Currently the code I use to import this data into arrays works like this:
from time import clock
from numpy import zeros, int16, int32, hstack, array, savez
from struct import unpack
from os.path import getsize

start_time = clock()
file_size = getsize(input_file)

with open(input_file, 'rb') as openfile:
    input_data = openfile.read()

# Parse the header for the record size, then derive the record count
# and the number of samples per record from the file size.
header = input_data[:149]
record_size = int(header[23:31])
number_of_records = (file_size - 149) / record_size
sample_rate = ((record_size - 4) / 4) / 2

time_series = zeros(0, dtype=int32)
t_series = zeros(0, dtype=int16)
x_series = zeros(0, dtype=int16)
y_series = zeros(0, dtype=int16)
z_series = zeros(0, dtype=int16)

for record in xrange(number_of_records):
    record_start = 149 + record * record_size
    # 4-byte little-endian timestamp, then sample_rate * 4 little-endian shorts
    time_stamp = array(unpack('<l', input_data[record_start:record_start + 4]), dtype=int32)
    unpacked_record = unpack('<' + str(sample_rate * 4) + 'h',
                             input_data[record_start + 4:record_start + record_size])
    record_t = zeros(sample_rate, dtype=int16)
    record_x = zeros(sample_rate, dtype=int16)
    record_y = zeros(sample_rate, dtype=int16)
    record_z = zeros(sample_rate, dtype=int16)
    # De-interleave the four data streams
    for sample in xrange(sample_rate):
        record_t[sample] = unpacked_record[(sample * 4) + 0]
        record_x[sample] = unpacked_record[(sample * 4) + 1]
        record_y[sample] = unpacked_record[(sample * 4) + 2]
        record_z[sample] = unpacked_record[(sample * 4) + 3]
    time_series = hstack((time_series, time_stamp))
    t_series = hstack((t_series, record_t))
    x_series = hstack((x_series, record_x))
    y_series = hstack((y_series, record_y))
    z_series = hstack((z_series, record_z))

savez(output_file, t=t_series, x=x_series, y=y_series, z=z_series, time=time_series)
end_time = clock()
print 'Total Time', end_time - start_time, 'seconds'
This currently takes about 250 seconds per 700 MB file, which seems far too slow to me. Is there a more efficient way to do this?
Using the numpy fromfile method with a custom dtype cut the runtime to 9 seconds, 27x faster than the original code above. (The original is slow largely because it calls unpack once per record and because the repeated hstack calls reallocate and copy the growing arrays on every iteration.) The final code is below.
from numpy import savez, dtype, fromfile
from os.path import getsize
from time import clock

start_time = clock()
file_size = getsize(input_file)

with open(input_file, 'rb') as openfile:
    header = openfile.read(149)
    record_size = int(header[23:31])
    number_of_records = (file_size - 149) / record_size
    sample_rate = ((record_size - 4) / 4) / 2

    # One compound dtype describes an entire record: a 4-byte timestamp
    # followed by a (sample_rate x 4) block of 2-byte samples.
    record_dtype = dtype([('timestamp', '<i4'),
                          ('samples', '<i2', (sample_rate, 4))])

    # Read every record in a single call
    data = fromfile(openfile, dtype=record_dtype, count=number_of_records)

time_series = data['timestamp']
t_series = data['samples'][:, :, 0].ravel()
x_series = data['samples'][:, :, 1].ravel()
y_series = data['samples'][:, :, 2].ravel()
z_series = data['samples'][:, :, 3].ravel()

savez(output_file, t=t_series, x=x_series, y=y_series, z=z_series, fid=time_series)
end_time = clock()
print 'It took', end_time - start_time, 'seconds'
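For completeness, the saved arrays can be read back out of the .npz archive with numpy.load; the keys are the keyword names passed to savez (a minimal sketch):

from numpy import load

archive = load(output_file)        # opens the .npz archive
t_series = archive['t']            # access arrays by the keywords used in savez
time_series = archive['fid']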
Some hints:

- Don't use the struct module. Instead, use NumPy's structured data types and fromfile. Check here: http://scipy-lectures.github.com/advanced/advanced_numpy/index.html#example-reading-wav-files
- You can read all of the records at once by passing a suitable count= to fromfile.
Something like this (untested, but you get the idea):
import numpy as np

file = open(input_file, 'rb')
header = file.read(149)

# ... parse the header as you did ...

record_dtype = np.dtype([
    ('timestamp', '<i4'),
    ('samples', '<i2', (sample_rate, 4)),
])

# NB: count can be omitted -- it just reads the whole file then
data = np.fromfile(file, dtype=record_dtype, count=number_of_records)

time_series = data['timestamp']
t_series = data['samples'][:, :, 0].ravel()
x_series = data['samples'][:, :, 1].ravel()
y_series = data['samples'][:, :, 2].ravel()
z_series = data['samples'][:, :, 3].ravel()
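If the files ever grow beyond what fits in memory, the same structured dtype should also work with numpy.memmap, which views the records in place instead of copying them (an untested sketch along the same lines, reusing record_dtype and number_of_records from above):

import numpy as np

# Map the records directly; offset skips the 149-byte header.
data = np.memmap(input_file, dtype=record_dtype, mode='r',
                 offset=149, shape=(number_of_records,))

time_series = data['timestamp']        # these are views into the file,
t_series = data['samples'][:, :, 0]    # nothing is copied until you ask for it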