
Alternative to Nested Loop For Comparison

I'm currently writing a program that needs to compare each file in an ArrayList of variable size. Right now, the way I'm doing this is with a nested loop:

    if (tempList.size() > 1) {
        for (int i = 0; i < tempList.size(); i++) {
            // Nested loops.  I should feel dirty?
            for (int j = i + 1; j < tempList.size(); j++) {
                // *Gets sorted.
                System.out.println(checkBytes(tempList.get(i), tempList.get(j)));
            }
        }
    }

I've read a few differing opinions on the necessity of nested loops, and I was wondering if anyone had a more efficient alternative.

At a glance, each comparison is going to need to be done, either way, so the performance should be fairly steady, but I'm moderately convinced there's a cleaner way to do this. Any pointers?

EDIT:: This is only a part of the function, for clarity. The files have already been compared and put into buckets based on length - after going through the map of buckets and finding a bucket with more than one file in it, it runs this. So - these are all files of the same size. I will be doing a checksum comparison before I get to bytes as well, but right now I'm just trying to clean up the loop.

Also, holy cow this site responds fast. Thanks, guys.

EDIT2:: Sorry, for further clarification: The file handling part I've got a decent grasp on, I think - first, I compare and sort by length, then by checksum, then by bytes - the issue I have is how to properly deal with needing to compare all files in the ArrayList efficiently, assuming they all need to be compared. If a nested loop is sufficient for this, that's cool, I just wanted to check that this was a suitable method, convention-wise.
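For illustration, the length-bucketing step I'm describing might look roughly like this (a sketch with made-up names, not my exact code):

    import java.io.File;
    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    class LengthBuckets {
        // Group files by length; only buckets holding more than one file go on
        // to the checksum/byte comparison.
        static Map<Long, List<File>> bucketByLength(List<File> allFiles) {
            Map<Long, List<File>> byLength = new HashMap<Long, List<File>>();
            for (File f : allFiles) {
                List<File> bucket = byLength.get(f.length());
                if (bucket == null) {
                    bucket = new ArrayList<File>();
                    byLength.put(f.length(), bucket);
                }
                bucket.add(f);
            }
            return byLength;
        }
    }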

Asked by KGVT on Apr 23 '10

2 Answers

My answer to your EDIT2 question is in two parts

The first part is that if you have a small number of files, then your nested loop approach should be fine. The performance is O(N**2) and the optimal solution is O(N). However, if N is small enough it won't make much difference which approach you use. You only need to consider an alternative solution if you are sure that N can be large.

The second part spells out an algorithm that exploits file hashes to get an O(N) solution for detecting duplicates. This is what the previous answers alluded to.

  1. Create a FileHash class to represent file hash values. This needs to define equals(Object) and hashCode() methods that implement byte-wise equality of the file hashes.

  2. Create a HashMap<FileHash, List<File>> map instance.

  3. For each File in your input ArrayList:

    1. Calculate the hash for the file, and create a FileHash object for it.
    2. Look up the FileHash in the map:
    3. If you found an entry, perform a byte-wise comparison of your current file with each of the files in the list you got from the map. If you find a duplicate file in the list, BINGO! Otherwise, add the current file to the list.
    4. If you didn't find an entry, create a new map entry with the FileHash as the key, and the current file as the first element of the value list.

(Note that the map above is really a multi-map, and that there are 3rd party implementations available; e.g. in Apache commons collections and Google collections. I've presented the algorithm in the form above for the sake of simplicity.)
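A minimal sketch of the steps above might look like this (SHA-256 and the class names are illustrative choices; treat it as an outline rather than a finished implementation):

import java.io.File;
import java.io.IOException;
import java.nio.file.Files;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Step 1: a FileHash value class with content-based equals() and hashCode().
class FileHash {
    private final byte[] digest;

    FileHash(byte[] digest) {
        this.digest = digest;
    }

    static FileHash of(File file) throws IOException, NoSuchAlgorithmException {
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        return new FileHash(md.digest(Files.readAllBytes(file.toPath())));
    }

    @Override
    public boolean equals(Object o) {
        return o instanceof FileHash && Arrays.equals(digest, ((FileHash) o).digest);
    }

    @Override
    public int hashCode() {
        return Arrays.hashCode(digest);
    }
}

class DuplicateFinder {
    // Steps 2 and 3: one pass over the files, grouping them by hash and
    // confirming candidate duplicates with a byte-wise comparison.
    static List<File[]> findDuplicates(List<File> files) throws IOException, NoSuchAlgorithmException {
        Map<FileHash, List<File>> map = new HashMap<FileHash, List<File>>();
        List<File[]> duplicates = new ArrayList<File[]>();
        for (File file : files) {
            FileHash hash = FileHash.of(file);
            List<File> bucket = map.get(hash);
            if (bucket == null) {
                // 3.4: no entry yet, so start a new list keyed by this hash
                bucket = new ArrayList<File>();
                map.put(hash, bucket);
            } else {
                // 3.3: same hash seen before, so confirm byte-wise
                for (File candidate : bucket) {
                    if (Arrays.equals(Files.readAllBytes(candidate.toPath()),
                                      Files.readAllBytes(file.toPath()))) {
                        duplicates.add(new File[] { candidate, file }); // BINGO!
                    }
                }
            }
            bucket.add(file);
        }
        return duplicates;
    }
}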

Some performance issues:

  • If you use a good cryptographic hash function to generate your file hashes, then the chances of finding an entry in step 3.3 whose list has more than one element are vanishingly small, and the chances that the byte-wise comparison will say the files are not equal are also vanishingly small. However, the cost of calculating the crypto hash will be greater than the cost of calculating a lower-quality hash.

  • If you do use a lower quality hash, you can mitigate the potential cost of comparing more files by looking at the file sizes before you do the byte-wise comparison. If you do that you can make the map type HashMap<FileHash, List<FileTuple>> where FileTuple is a class that holds both a File and its length.

  • You could potentially decrease the cost of hashing by using a hash of just (say) the first block of each file. But that increases the probability that two files may have the same hash but still be different; e.g. in the 2nd block. Whether this is significant depends on the nature of the files. (But for example if you just checksummed the first 256 bytes of a collection of source code files, you could get a huge number of collisions ... due to the presence of identical copyright headers!)
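For example, a "first block only" hash might be computed along these lines (the block size and names here are arbitrary illustrations):

import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.util.Arrays;

class FirstBlockHash {
    // Hash only the first block of the file: cheaper than hashing everything,
    // but more collision-prone, so a match still needs a full byte-wise check.
    static int hashFirstBlock(File file, int blockSize) throws IOException {
        byte[] buffer = new byte[blockSize];
        FileInputStream in = new FileInputStream(file);
        try {
            int read = in.read(buffer);
            return Arrays.hashCode(read <= 0 ? new byte[0] : Arrays.copyOf(buffer, read));
        } finally {
            in.close();
        }
    }
}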

Answered by Stephen C


A good optimization would be to first calculate all the file hashes, and then do a single loop over the list.

This is basically because you would otherwise have to compare every pair of files in your list, whereas with precomputed hashes each file only needs a single O(1) set lookup instead of an expensive comparison against every other file.

You can do something like:

class YourFile
{
    private int hashcode = -1;

    @Override
    public int hashCode()
    {
        // override it to provide a hash code based on file contents;
        // the value is cached to avoid recalculating anything
        if (hashcode == -1)
            hashcode = calculateIt();

        return hashcode;
    }

    @Override
    public boolean equals(Object other)
    {
        // HashSet uses equals() together with hashCode(), so it must be
        // overridden as well (here simply in terms of the content hash)
        return other instanceof YourFile && ((YourFile) other).hashCode() == hashCode();
    }
}

HashSet<YourFile> fileSet = new HashSet<YourFile>();
ArrayList<YourFile> files = new ArrayList<YourFile>();

// fill up files
files.add(...);

// do comparisons
for (YourFile f : files)
{
    if (fileSet.contains(f))
    {
        // f matches an already-seen file: this is a tricky utilization of the
        // hashCode() method, so be careful about it!
    }
    else
    {
        fileSet.add(f);
        // since there's no file with the same hashcode you just add this one
    }
}

This actually drops the inner loop: when you call fileSet.contains(f) it effectively checks against all the files already added, but with O(1) expected complexity.

As doublep pointed out, you have to be careful about performance: a plain byte-by-byte check can stop as soon as it finds two different bytes, while calculating the hash always has to read the whole file. The hashing approach works well when you have many files or when the files are rather small; the best thing to do would be to benchmark both approaches and see if there are notable differences.
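To make that byte-wise point concrete, a comparison that stops at the first differing byte could be sketched like this (the buffering and names are just an illustration):

import java.io.BufferedInputStream;
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;

class ByteCompare {
    // Streams both files and returns as soon as a differing byte is found,
    // so unequal files are often rejected after reading only a few bytes.
    static boolean sameBytes(File a, File b) throws IOException {
        if (a.length() != b.length()) {
            return false;
        }
        InputStream inA = new BufferedInputStream(new FileInputStream(a));
        InputStream inB = new BufferedInputStream(new FileInputStream(b));
        try {
            int byteA;
            while ((byteA = inA.read()) != -1) {
                if (byteA != inB.read()) {
                    return false;
                }
            }
            return true;
        } finally {
            inA.close();
            inB.close();
        }
    }
}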

Answered by Jack