We've got a mature body of code that loads data from files into a database. There are several file formats; they are all fixed-width fields.
Part of the code uses the Perl unpack()
function to read fields from the input data into package variables.
Business logic is then able to refer to these fields in a 'human-readable' way.
The file reading code is generated from a format description once, prior to reading a file.
In sketch form, the generated code looks like this:
while ( <> ) {
    # Start of generated code.
    # Here we unpack 2 fields; the real code does around 200.
    ( $FIELDS::transaction_date, $FIELDS::customer_id ) = unpack q{A8 A20}, $_;
    # Some fields have leading space removed.
    # Generated code has one line like this per affected field.
    $FIELDS::customer_id =~ s/^\s+//;
    # End of generated code.
    # Then we apply business logic to the data ...
    if ( $FIELDS::transaction_date eq $today ) {
        push @fields, q{something or other};
    }
    # Write to standard format for bulk load into the database.
    print $fh join( '|', @fields ) . qq{\n} or die;
}
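It is worth spelling out why the regex pass exists at all: unpack's A format already strips trailing whitespace (and nulls), so only leading padding needs the extra substitution. A minimal sketch, with a made-up record layout matching the A8 A20 template above:

```perl
use strict;
use warnings;

# Hypothetical fixed-width record: 8-char date, 20-char right-justified id.
my $record = "20240131" . "        ACME-0042   ";

# 'A' strips trailing whitespace for us; leading whitespace survives.
my ( $transaction_date, $customer_id ) = unpack 'A8 A20', $record;
print "[$customer_id]\n";    # trailing spaces already gone, leading spaces remain

# Hence the one extra regex per affected field.
$customer_id =~ s/^\s+//;
print "[$customer_id]\n";    # [ACME-0042]
```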
Profiling the code reveals that around 35% of the time is spent in the unpack and leading-space strip. The remaining time is spent in validating and transforming the data, and writing to the output file.
It appears that there is no single part of the business logic that takes more than 1-2% of the run time.
The question is: can we eke out a bit more speed from the unpacking and space stripping somehow? Preferably without refactoring all the code that refers to the FIELDS package variables.
EDIT:
In case it makes a difference:
$ perl -v
This is perl, v5.8.0 built for PA-RISC1.1
I've actually dealt with this problem over and over again. Unpack is better than substr.
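That claim is easy to check on your own data with the core Benchmark module. A quick sketch (field layout and values are made up; note the substr version needs an extra trailing-space strip to match what unpack's A format does for free):

```perl
use strict;
use warnings;
use Benchmark qw(cmpthese);

# Hypothetical two-field record: 8-char date, 20-char right-justified id.
my $line = '20240131' . sprintf '%20s', 'ACME-0042';

cmpthese( 100_000, {
    unpack => sub {
        my ( $date, $id ) = unpack 'A8 A20', $line;
    },
    substr => sub {
        my $date = substr $line, 0, 8;
        my $id   = substr $line, 8, 20;
        $id =~ s/\s+$//;    # substr keeps trailing spaces; unpack's A strips them
    },
} );
```

Running this prints a rates table; the relative ordering on your data and your perl build is what matters, not the absolute numbers.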
As far as stripping spaces goes, you're pretty much stuck: that regex hack is the "official" way to do it. You might gain some efficiency by refining your unpack templates (if no value is ever longer than 4 digits, why unpack the full 12-digit field?), but otherwise the parsing is just a p.i.t.a.
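To make the template-refining idea concrete, here is a sketch. It assumes (purely for illustration) that the 20-char field is right-justified and its values never exceed 9 characters, so the pad bytes can be skipped with `x` instead of being unpacked and then stripped:

```perl
use strict;
use warnings;

# Hypothetical record: 8-char date, 20-char right-justified customer id.
my $line = '20240131' . sprintf '%20s', 'ACME-0042';

# Full-width template: unpack all 20 chars, then strip leading spaces.
my ( $date_full, $id_full ) = unpack 'A8 A20', $line;
$id_full =~ s/^\s+//;

# Narrowed template: skip the 11 pad bytes with 'x11' and unpack only
# the 9 chars that can actually hold data.
my ( $date, $id ) = unpack 'A8 x11 A9', $line;
$id =~ s/^\s+//;    # still needed when a value is shorter than 9 chars

print "$id\n";    # ACME-0042
```

Whether this wins anything depends on how much of each field is padding; it also quietly truncates data if the length assumption is ever wrong, so validate it against the format description first.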
Good luck with your flat data. Fricking legacy junk, how I hates it.
Are you sure that you are processor bound on this task? The computation is simple enough that the whole process may well be I/O bound, in which case optimizing for a faster unpack won't gain you much time.
If you are in fact processor bound, the problem as described looks quite parallelizable, but of course the devil is in the details of your business computation.
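One cheap way to parallelize, sketched below with fork from core Perl: fan the records out to N workers, each unpacking its share and writing its own pipe-delimited file for the bulk load. The worker count, file names, and stand-in input are all illustrative, not from the original code, and this only pays off if the business logic per record dominates:

```perl
use strict;
use warnings;

my $workers = 4;

# Stand-in input: 100 records of 8-char date + 20-char right-justified id.
my @lines = map { sprintf "%08d%20s\n", $_, "CUST-$_" } 1 .. 100;

my @pids;
for my $w ( 0 .. $workers - 1 ) {
    my $pid = fork;
    die "fork failed: $!" unless defined $pid;
    if ( $pid == 0 ) {
        # Child: process every $workers-th record, write its own output file.
        open my $fh, '>', "out.$w.psv" or die $!;
        for my $i ( grep { $_ % $workers == $w } 0 .. $#lines ) {
            my ( $date, $id ) = unpack 'A8 A20', $lines[$i];
            $id =~ s/^\s+//;
            print {$fh} join( '|', $date, $id ), "\n" or die $!;
        }
        close $fh or die $!;
        exit 0;
    }
    push @pids, $pid;
}
waitpid $_, 0 for @pids;
```

On a real dataset you would have each worker seek to its slice of the input file rather than preloading lines, and concatenate or bulk-load the per-worker output files afterwards.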