 

Memory/performance tradeoff when determining the size of a Perl hash

Tags:

perl

I was browsing through some Perl code in a popular repository on GitHub and ran across this method to calculate the size of a hash:

while ( my ($a, undef ) = each %h ) { $num++; }

I wondered why one would go to the trouble of writing all that code when it could be written more simply as

$num = scalar keys %h;
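For reference, a minimal self-contained sketch (the hash contents here are purely illustrative) confirming that both approaches produce the same count:

```perl
use strict;
use warnings;

my %h = (1 .. 1000);    # a list of 1000 scalars => 500 key/value pairs

# Method 1: walk the hash with each(), counting one pair per iteration
my $num_each = 0;
while ( my ($k, undef) = each %h ) { $num_each++; }

# Method 2: evaluate keys in scalar context to get the element count
my $num_keys = scalar keys %h;

print "$num_each $num_keys\n";    # prints "500 500"
```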

So, I compared both methods with Benchmark.

use Benchmark qw(cmpthese);

my %h = (1 .. 1000);
cmpthese(-10, {
    keys      => sub {
        my $num = 0;
        $num = scalar keys %h;
    },
    whileloop => sub {
        my $num = 0;
        while ( my ($a, undef) = each %h ) {
            $num++;
        }
    },
});
 RESULTS
                Rate whileloop      keys
 whileloop    5090/s        --     -100%
 keys      7234884/s   142047%        --

The results show that using keys is MUCH faster than the while loop. My question is this: why would the original coder use such a slow method? Is there something that I'm missing? Also, is there a faster way?

pcantalupo asked Mar 01 '15

2 Answers

I cannot read the mind of whoever wrote that piece of code, but they likely thought that

my $n = keys %hash;

used more memory than iterating through everything using each.

Note that the scalar variable on the left-hand side of the assignment already imposes scalar context: there is no need for scalar unless you want to force scalar context in what would otherwise be list context.
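A small illustration of that point (the hash here is hypothetical, shown only to demonstrate context):

```perl
use strict;
use warnings;

my %hash = ( a => 1, b => 2, c => 3 );

# Assignment to a scalar variable already imposes scalar context,
# so keys returns the element count without scalar():
my $n = keys %hash;                           # $n is 3

# print's argument list is list context, so here scalar() IS needed
# to get the count rather than the keys themselves:
print "count: ", scalar(keys %hash), "\n";    # prints "count: 3"
```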

Sinan Ünür answered Nov 14 '22


Because he didn't know that keys, in scalar context, returns the number of elements in the hash.

ikegami answered Nov 14 '22