There are some surprisingly good image-comparison tools which find similar images even if they are not exactly the same (e.g. changes in size, wallpaper, brightness/contrast). I have some example applications here:
I only tried the first one, but all of them are developed for Windows and are not open source. Unique Filer was released in 2000 and its homepage seems to have disappeared. It was surprisingly fast (even on computers from that year) because it used an index: comparing some 10,000 images using the index took only a few seconds (and updating the index was a scalable process).
Since this algorithm has existed in a very effective form for at least 15 years, I assume it is well documented and possibly already implemented as an open-source library. Does anyone know more about which algorithm or image-detection theory was used to implement these applications? Maybe there is even an open-source implementation of it available?
I already checked the question Algorithm for finding similar images, but all of its answers solve the problem by comparing one image to another. For 1000+ images this results in 1000^2 comparison operations, which is just not what I'm looking for.
The similarity of two images can be detected with the "imagehash" package: if two images are identical or almost identical, the imagehash difference will be 0, and the closer the difference is to 0, the more similar the two images are.
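A minimal sketch of how this looks with the Python imagehash package and Pillow (the file names are placeholders):

```python
from PIL import Image
import imagehash

# Perceptual hashes of two image files (placeholder paths).
h1 = imagehash.phash(Image.open("photo_a.jpg"))
h2 = imagehash.phash(Image.open("photo_b.jpg"))

# Subtracting two hashes yields their Hamming distance: 0 means (near-)identical,
# small values mean similar, large values mean dissimilar.
print(h1 - h2)
```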
The problem you are describing is more generally called Nearest Neighbor Search. Since you are asking for high efficiency on large datasets, Approximate Nearest Neighbor Search is what you are after.
An efficient technique for this is Locality-Sensitive Hashing (LSH), for which these slides give a great overview. Its basic idea is to use hash functions that project all data to a low-dimensional space, with the constraint that the hashes of similar data collide with high probability while the hashes of dissimilar data collide with low probability. These probabilities are parameters of the algorithm, through which the trade-off between accuracy and efficiency can be tuned.
LSHKIT is an open-source implementation of LSH.
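To make the idea concrete, here is a minimal random-hyperplane LSH sketch in Python/NumPy. It only illustrates the principle (it is not the LSHKIT API), and the feature vectors are assumed to come from whatever image descriptor you use:

```python
import numpy as np

# Each vector is reduced to a short bit signature by taking the sign of its
# projection onto a set of random hyperplanes; similar vectors end up in the
# same bucket with high probability.

rng = np.random.default_rng(42)
DIM, N_BITS = 100, 16                     # feature dimension, bits per hash key
planes = rng.normal(size=(N_BITS, DIM))   # one random hyperplane per bit

def lsh_key(vec):
    """Project onto the hyperplanes and keep only the signs (the hash key)."""
    return tuple(int(b) for b in (planes @ vec > 0))

buckets = {}                              # hash key -> list of image ids

def insert(image_id, vec):
    buckets.setdefault(lsh_key(vec), []).append(image_id)

def candidates(vec):
    """Only images in the same bucket are compared exactly afterwards,
    which avoids the full N^2 pairwise comparison."""
    return buckets.get(lsh_key(vec), [])
```

In practice, several such tables with independently drawn hyperplanes are queried and their candidate sets merged; the number of bits and tables are exactly the accuracy/efficiency parameters mentioned above.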
Meanwhile, I analyzed the algorithm of UniqueFiler:
First, it reduces all images to 10x10 pixel grayscale images (likely without using interpolation)
Based on the brightness of the 4 quadrants, some rotation is probably done (this step is dangerous because it sometimes 'overlooks' similarities if the images are too symmetric)
The image brightness range is fully stretched (brightest -> white, darkest -> black) and then reduced to 2 bits (4 values) per pixel
The values get stored as arrays of 100 bytes per image, plus file metadata (a rough sketch of these steps follows below)
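Based on that description, a rough reconstruction of the fingerprint could look like this with Pillow and NumPy. This is my own sketch, not UniqueFiler's code, and the quadrant-based rotation step is omitted:

```python
import numpy as np
from PIL import Image

def fingerprint(path):
    """Rough reconstruction of the fingerprint described above
    (the rotation normalization from step 2 is omitted for brevity)."""
    # 1. reduce to a 10x10 grayscale image, nearest neighbour (no interpolation)
    img = Image.open(path).convert("L").resize((10, 10), Image.NEAREST)
    px = np.asarray(img, dtype=np.float32)
    # 3. stretch the brightness range: darkest -> 0 (black), brightest -> 255 (white)
    lo, hi = px.min(), px.max()
    if hi > lo:
        px = (px - lo) * 255.0 / (hi - lo)
    # ... then quantize to 2 bits (4 levels) per pixel
    levels = (px // 64).astype(np.uint8)      # values 0..3
    # 4. one byte per pixel -> an array of 100 bytes
    return levels.flatten()
```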
The comparison is done one-by-one (two nested loops over the whole database plus a third over the 100 bytes). Today, we would probably index the sorted sums of all 4 quadrants for a fast pre-selection of similar candidates.
Each pair of fingerprints is compared byte by byte: the difference between every two corresponding bytes is weighted, but by less than its square, and the sum of these 100 results is the final difference between the two images.
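Combined with the fingerprint sketch above, the pairwise comparison and the suggested quadrant-sum pre-selection could look roughly like this. The exact weighting is not documented, so the exponent 1.5 ("more than linear, less than the square") is an assumption:

```python
def difference(fp_a, fp_b):
    """Byte-by-byte comparison of two 100-byte fingerprints. The weighting
    exponent 1.5 is an assumption; the text only says it is less than the square."""
    d = np.abs(fp_a.astype(np.int16) - fp_b.astype(np.int16))
    return float((d ** 1.5).sum())

def quadrant_key(fp):
    """Sorted sums of the four 5x5 quadrants, usable as a coarse index key
    for pre-selecting candidate images instead of scanning the whole database."""
    g = fp.reshape(10, 10)
    sums = (g[:5, :5].sum(), g[:5, 5:].sum(), g[5:, :5].sum(), g[5:, 5:].sum())
    return tuple(sorted(int(s) for s in sums))
```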
I have more detailed information at home. If I find the time, I will add it to this answer. I found all of this after discovering that the database format is actually a gzipped file without a header, containing fixed-size records per image.