
How should I delete hash elements while iterating?

Tags: hash, perl

I have a fairly large hash (some 10M keys) and I would like to delete some elements from it.

I usually don't like to use delete or splice, and I wind up copying what I want instead of deleting what I don't. But this time, since the hash is really large, I think I'd like to delete directly from it.
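
For reference, the copy-what-I-want approach I'd normally take looks roughly like this (just a sketch, reusing the should_be_deleted() check from the code below):

my @wanted = grep { !should_be_deleted($_) } keys %hash;
my %filtered;
@filtered{@wanted} = @hash{@wanted};   # copy only the entries worth keeping
%hash = %filtered;

With some 10M keys, though, that temporary copy is exactly what I'd like to avoid here.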

So I'm doing something like this:

foreach my $key (keys %hash) {
 if (should_be_deleted($key)) {
  delete($hash{$key});
 }
}

And it seems to work OK. But what if I'd like to delete some elements even before iterating over them? I'll explain by example:

foreach my $key (keys %hash) {
 if (should_be_deleted($key)) {
  delete($hash{$key});
  # if $key should be deleted, then so should "$key.a", "kkk.$key" and some other
  # keys I already know how to calculate. I would like to delete them now...
 }
}

I thought of some possible solutions: checking whether a key still exists as the first step in the loop, or doing a first pass that only builds a list of keys to delete (without actually deleting anything) and then deleting them in a second loop.
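
The first option would look roughly like this (sketch only, with the related keys written out as in the example above):

foreach my $key (keys %hash) {
    # the key list was captured once, before the loop started, so keys
    # deleted below will still come up here - skip them explicitly
    next unless exists $hash{$key};

    if (should_be_deleted($key)) {
        delete $hash{$key};
        delete $hash{$key . 'a'};
        delete $hash{'kkk' . $key};
        # ... plus whatever other related keys I can calculate
    }
}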

What are your thoughts regarding this?

UPDATE

It seems that the double-pass approach is the consensus. However, it is quite inefficient in the sense that during the first pass I re-check keys that were already marked for deletion. It also compounds: not only do I re-check such a key, I also recalculate the related keys that should be deleted with it, even though they were already calculated from the original key.

Perhaps I need some more dynamic data structure for iterating over the keys, one that is updated as I go?
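
In the meantime, the marking pass could at least skip keys that were already flagged via a related key, something like this (sketch, again using the "$key.a" / "kkk.$key" examples from above):

my %to_delete;
foreach my $key (keys %hash) {
    next if $to_delete{$key};             # already flagged via a related key
    next unless should_be_deleted($key);

    # flag the key together with its related keys, so they are calculated only once
    $to_delete{$_} = 1 for $key, $key . 'a', 'kkk' . $key;
}
delete @hash{keys %to_delete};            # one hash-slice delete at the end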

asked Oct 21 '10 by David B


3 Answers

I recommend doing two passes because it's more robust. Hash order is effectively random, so there is no guarantee that you'll see the "primary" keys before the related ones. For example, if should_be_deleted() only detects the unwanted primary keys, while the related keys are merely calculated from them, you could end up processing unwanted data before discovering that it should have been deleted. A two-pass approach avoids this issue.

my @unwanted;
foreach my $key (keys %hash) {
    if (should_be_deleted($key)) {
         push @unwanted, $key;
         # push any related keys onto @unwanted
    }
}

delete @hash{@unwanted};   # delete all of the unwanted keys in a single hash-slice operation

foreach my $key (keys %hash) {
    # do something
}

answered Nov 04 '22 by Michael Carman


How about this:

my %to_delete;

foreach my $key (keys %hash) {
    if (should_be_deleted($key)) {
        $to_delete{$key}++;
    }
    # add some other keys the same way...
}

delete @hash{keys %to_delete};   # one hash-slice delete of all the marked keys

answered Nov 04 '22 by Eugene Yarmash


You can mark the hash elements to be deleted by setting their values to undef. That avoids wasting space on a separate list of keys to be deleted, and it also avoids re-checking elements that are already marked for deletion. It is also less wasteful to use each instead of for: a for over keys %hash builds a list of all the hash keys before the loop starts, whereas each walks the hash one element at a time. Note that deleting the element most recently returned by each is documented to be safe, which is all the second loop below does.

Like this:

while ( my ($key, $val) = each %hash ) {

    # skip elements already marked (undef) and elements that should be kept
    next unless defined $val and should_be_deleted($key);

    # mark this key and its related keys for deletion
    $hash{$key}       = undef;
    $hash{$key.'a'}   = undef;
    $hash{'kkk'.$key} = undef;
}

while ( my ($key, $val) = each %hash ) {
    delete $hash{$key} unless defined $val;
}

answered Nov 04 '22 by Borodin