Removing files with duplicate content from single directory [Perl, or algorithm]

Tags:

perl

I have a folder with a large number of files, some of which have exactly the same contents. I want to remove files with duplicate contents: if two or more files with the same content are found, I'd like to keep one of them and delete the others.

Following is what I came up with, but I don't know if it works :) I haven't tried it yet.

How would you do it? Perl or general algorithm.

use strict;
use warnings;

my @files = glob("./files/*.txt");

# small helper (added so the sketch runs): slurp a whole file into a scalar
sub read_file {
    my ($path) = @_;
    open my $fh, '<', $path or die "Can't open $path: $!";
    local $/;                  # enable slurp mode
    my $data = <$fh>;
    close $fh;
    return $data;
}

my $current = 0;

while( $current <= $#files ) {

    # read contents of $files[$current] into $contents1 scalar
    my $contents1 = read_file($files[$current]);

    my $compareTo = $current + 1;
    while( $compareTo <= $#files ) {

        # read contents of $files[$compareTo] into $contents2 scalar
        my $contents2 = read_file($files[$compareTo]);

        if( $contents1 eq $contents2 ) {
            # delete the duplicate from disk, then drop it from the list
            unlink $files[$compareTo] or warn "Can't delete $files[$compareTo]: $!";
            splice(@files, $compareTo, 1);
        }
        else {
            $compareTo++;
        }
    }

    $current++;
}
flamey asked Nov 17 '09

2 Answers

Here's a general algorithm (edited for efficiency now that I've shaken off the sleepies -- and I also fixed a bug that no one reported)... :)

It's going to take forever (not to mention a lot of memory) if I compare every single file's contents against every other. Instead, why don't we group the files by size first, and then compare checksums only for those files of identical size.

So when we calculate each file's size, we can use a hash table to do the matching for us, storing files of the same size together in arrayrefs:

use strict;
use warnings;
use Digest::MD5 qw(md5_hex);

my %files_by_size;
foreach my $file (@ARGV)
{
    push @{$files_by_size{-s $file}}, $file;   # store filename in the bucket for this file size (in bytes)
}

Now we just have to pull out the potential duplicates and check if they are the same (by creating a checksum for each, using Digest::MD5), using the same hashing technique:

while (my ($size, $files) = each %files_by_size)
{
    next if @$files == 1;

    my %files_by_md5;
    foreach my $file (@$files)
    {
        open my $filehandle, '<', $file or die "Can't open $file: $!";
        # enable slurp mode
        local $/;
        my $data = <$filehandle>;
        close $filehandle;

        my $md5 = md5_hex($data);
        push @{$files_by_md5{$md5}}, $file;       # store filename in the bucket for this MD5
    }

    while (my ($md5, $files) = each %files_by_md5)
    {
        next if @$files == 1;
        print "These files are equal: " . join(", ", @$files) . "\n";
    }
}
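
Since the question is about removing the duplicates rather than just reporting them, the inner loop that prints the matches could instead keep the first file in each MD5 bucket and unlink the rest. A minimal sketch of that variant, assuming the first-seen copy is the one worth keeping:

    while (my ($md5, $files) = each %files_by_md5)
    {
        next if @$files == 1;

        my ($keep, @duplicates) = @$files;   # keep the first file in this MD5 bucket
        print "Keeping $keep, removing: " . join(", ", @duplicates) . "\n";
        foreach my $duplicate (@duplicates)
        {
            unlink $duplicate or warn "Can't remove $duplicate: $!";
        }
    }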

-fini

Ether answered Sep 20 '22


Perl, with the Digest::MD5 module:

use strict;
use warnings;
use Digest::MD5;

my %seen = ();
while ( <*> ) {
    -d and next;                          # skip directories
    my $filename = $_;
    print "doing .. $filename\n";
    my $md5 = getmd5($filename);
    if ( ! defined $seen{$md5} ) {
        $seen{$md5} = $filename;
    } else {
        print "Duplicate: $filename and $seen{$md5}\n";
    }
}

sub getmd5 {
    my $file = shift;
    open my $fh, '<', $file or die "Cannot open file $file: $!\n";
    binmode($fh);
    my $md5 = Digest::MD5->new;
    $md5->addfile($fh);
    close($fh);
    return $md5->hexdigest;
}
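
The script above only reports the duplicates. To actually delete them, a small tweak (a sketch, assuming the first-seen copy is the one to keep) is to unlink the file in the else branch:

    } else {
        print "Duplicate: $filename and $seen{$md5}\n";
        unlink $filename or warn "Cannot delete $filename: $!\n";
    }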

If Perl is not a must and you are working on *nix, you can use shell tools:

find /path -type f -print0 | xargs -0 md5sum | \
    awk '($1 in seen){ print "duplicate: "$2" and "seen[$1] } \
         ( ! ($1 in  seen ) ) { seen[$1]=$2 }'
ghostdog74 answered Sep 20 '22