
Most efficient way to compute uniqueness (as %) of a file compared to several other, large files

Tags: file, algorithm

I have about 30 files of 500 MB each, one word per line. I have a script that does this, roughly, in bash:

for i in *; do
    : > /tmp/everythingButI                             # start from an empty scratch file
    for j in *; do
        [ "$j" = "$i" ] && continue                     # skip the file being measured
        cat "$j" >> /tmp/everythingButI
        sort /tmp/everythingButI | uniq > /tmp/sorted   # re-sort and dedupe after every append
        mv /tmp/sorted /tmp/everythingButI
    done
    comm -23 "$i" /tmp/everythingButI > /tmp/uniqueInI  # lines that appear only in $i

    uniqueLines=$(wc -l < /tmp/uniqueInI)
    totalLines=$(wc -l < "$i")
    percentUnique=$(echo "scale=2; 100 * $uniqueLines / $totalLines" | bc)
    echo "$i is $percentUnique% unique"
done

It computes the 'uniqueness' of each file (each file is already sorted and contains no duplicate lines).

So if I had files:

file1    file2   file3
a        b       1
c        c       c
d        e       e
f        g
         h

file1 would be 75% unique (because 1/4 of its lines are found in another file), file2 would be 60% unique, and file3 would be 33.33% unique. But make it 30 files at 500 MB a pop, and this script takes quite a while to run.
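To pin down the definition with that toy example, here is a quick Python check. The brute-force set approach is only for illustration (it would not fit thirty 500 MB files in 2 GB of RAM), and the file contents are hard-coded rather than read from disk:

import string  # not needed; shown hard-coded for clarity

files = {
    "file1": {"a", "c", "d", "f"},
    "file2": {"b", "c", "e", "g", "h"},
    "file3": {"1", "c", "e"},
}
for name, words in files.items():
    # Union of every other file's words.
    others = set().union(*(w for n, w in files.items() if n != name))
    pct = 100 * len(words - others) / len(words)
    print(f"{name} is {pct:.2f}% unique")   # 75.00, 60.00, 33.33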

I'd like to write a Python script that does this much, much faster, but I'm wondering what the fastest algorithm for this would actually be. (I also only have 2 GB of RAM on the PC.)

Anyone have opinions about algorithms, or know of a faster way to do this?

asked Oct 14 '22 by Tom Ritter

1 Answer

EDIT: Since each of the inputs is already internally sorted and deduplicated, what you actually need here is an n-way merge, and the hash-building exercise in the previous version of this post is rather pointless.

The n-way merge is kind of intricate if you're not careful. Basically, it works something like this:

  • Read in the first line of each file, and initialize its unique-lines counter and total-lines counter to 0.
  • Then repeat this loop body:
    • Find the least value among the current lines.
    • If exactly one file's current line holds that least value, increment that file's unique-lines counter.
    • For each file whose current line equals the least value, increment that file's total-lines counter and read its next line. If you hit end of file, you're done with that file: remove it from further consideration.
  • Loop until no files are left under consideration. At that point you have an accurate unique-lines count and total-lines count for each file, and the percentages are a simple matter of multiplication and division.

I've left out the priority queue that appears in the full form of the merge algorithm; it only becomes significant once you have a large enough number of input files.
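For concreteness, here is a minimal Python sketch of the merge described above. The names read_word and uniqueness are just for this illustration; it assumes each input is already sorted and deduplicated, one word per line, non-empty, and sorted under a collation that matches Python's string comparison (e.g. LC_ALL=C sort). Only the current line of each file is held in memory.

import sys

def read_word(handle):
    """Return the next word from the file, or None at end of file."""
    line = handle.readline()
    return line.rstrip("\n") if line else None

def uniqueness(paths):
    handles = [open(p) for p in paths]
    current = [read_word(h) for h in handles]   # head word of each file; None = exhausted
    unique = [0] * len(paths)
    total = [0] * len(paths)

    while any(w is not None for w in current):
        # Find the least value among the files still under consideration.
        # For ~30 files a linear scan is fine; a heap (heapq) only starts
        # to pay off with many more inputs.
        least = min(w for w in current if w is not None)
        holders = [i for i, w in enumerate(current) if w == least]

        # If exactly one file holds the least value, that line is unique to it.
        if len(holders) == 1:
            unique[holders[0]] += 1

        # Every file whose head equals the least value consumes that line:
        # count it toward that file's total and advance to its next line.
        for i in holders:
            total[i] += 1
            current[i] = read_word(handles[i])

    for h in handles:
        h.close()

    for path, u, t in zip(paths, unique, total):
        pct = 100.0 * u / t if t else 0.0
        print(f"{path} is {pct:.2f}% unique")

if __name__ == "__main__":
    uniqueness(sys.argv[1:])

Run it as, say, python uniqueness.py file1 file2 file3 (the script name and argument handling are illustrative). Because only one line per file is buffered at a time, memory use stays tiny no matter how large the files are.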

answered Nov 15 '22 by Jeffrey Hantin