
Near-duplicate video detection [closed]

I'm looking for an open source project that is able to solve the near-duplicate video detection problem. The best I've found so far is SOTU, but its source is closed. So, are there any open source solutions?

Also, I would be very grateful for some links on the theoretical side of this problem.

asked Dec 20 '12 by Nelson Tatius

1 Answer

Here is one project on near-duplicates: INDetector by the DVMM Lab at Columbia University (source-available, though not exactly open source, I think). There is also some information on applying it to video (mainly on keyframes).

There is also pHash, an open-source "perceptual hash" library for images.
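pHash itself is a C/C++ library. If you just want to experiment with the perceptual-hash idea from Python, the separate imagehash package (a different project from pHash, used here only to illustrate the concept; the filenames are placeholders) exposes a similar DCT-based hash:

```python
from PIL import Image
import imagehash  # pip install ImageHash; separate project from pHash, same idea

# DCT-based perceptual hashes of two keyframes
h1 = imagehash.phash(Image.open("frame_a.png"))
h2 = imagehash.phash(Image.open("frame_b.png"))

# Hamming distance between the 64-bit hashes; small values (roughly < 10)
# suggest the two frames are near-duplicates
print(h1 - h2)
```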

There is also IMMI, an open-source image mining plugin for RapidMiner.

Any of these can be applied to video as well as images by treating either all frames or selected frames (for example keyframes) as inputs to the algorithm, and then aggregating the pairwise frame-similarity results across the two clips.
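Here is a rough sketch of that frame-sampling-plus-aggregation structure, assuming OpenCV for frame access; the sampling interval, the tiny average hash used as a per-frame signature, and the aggregation rule are all just illustrative choices (any of the hashes or features above could be substituted):

```python
import cv2
import numpy as np

def frame_signatures(video_path, every_n=25, hash_size=8):
    """Sample every n-th frame and reduce it to a small binary signature
    (a simple average hash); keyframe selection could be used instead."""
    cap = cv2.VideoCapture(video_path)
    sigs, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % every_n == 0:
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            small = cv2.resize(gray, (hash_size, hash_size)).astype(np.float64)
            sigs.append(small > small.mean())
        idx += 1
    cap.release()
    return sigs

def clip_similarity(sigs_a, sigs_b):
    """Aggregate pairwise frame matches: for each sampled frame of clip A,
    take its best match in clip B, then average those best scores."""
    if not sigs_a or not sigs_b:
        return 0.0
    scores = []
    for sa in sigs_a:
        best = max(1.0 - np.count_nonzero(sa != sb) / sa.size for sb in sigs_b)
        scores.append(best)
    return float(np.mean(scores))

# Usage: pairs scoring close to 1.0 are candidates for near-duplicates.
# sim = clip_similarity(frame_signatures("a.mp4"), frame_signatures("b.mp4"))
```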

You might also try to get in touch with the authors of UQLIPS (Shen et al., cited below).

Also, look into the list of entries to TRECVID; some years had near-duplicate detection as one of the tasks, and you might be able to get in touch with some of those groups and obtain their software.

If you would like to pursue this yourself, implementing a prototype of any of the published algorithms should be fairly easy. I recommend (a) trying a number of simple algorithms on the data that interests you, and (b) using some type of voting/polling process to combine their outputs, based on the observation that a simple combination of simple algorithms often radically outperforms a single sophisticated algorithm in these kinds of problems.
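A minimal sketch of that voting idea, assuming each simple detector returns a similarity score in [0, 1]; the detector names, threshold, and vote count are all hypothetical:

```python
def is_near_duplicate(clip_a, clip_b, detectors, votes_needed=2, threshold=0.8):
    """Majority-style vote: each simple detector scores the pair in [0, 1];
    the pair is flagged only if enough detectors agree."""
    votes = sum(1 for detect in detectors if detect(clip_a, clip_b) >= threshold)
    return votes >= votes_needed

# Usage with three hypothetical simple detectors:
# detectors = [hash_similarity, color_histogram_similarity, edge_profile_similarity]
# is_near_duplicate("a.mp4", "b.mp4", detectors)
```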

Also, look into the Earth Mover's Distance (on color histograms, gradients, ...) for simple feature extraction (on all frames or on selected frames only). This can be conveniently done with a couple of lines of code in python/numpy/scipy/opencv.
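As a sketch of that "couple of lines" version, the 1-D Earth Mover's Distance between per-channel color histograms can be computed with scipy.stats.wasserstein_distance; the bin count, the per-channel averaging, and the frame-reading snippet are my own assumptions:

```python
import numpy as np
from scipy.stats import wasserstein_distance

def frame_color_emd(frame_a, frame_b, bins=64):
    """Earth Mover's Distance between normalized per-channel color histograms,
    averaged over the three channels of two frames (HxWx3 uint8 arrays)."""
    dists = []
    for ch in range(3):
        ha, edges = np.histogram(frame_a[:, :, ch], bins=bins, range=(0, 256), density=True)
        hb, _ = np.histogram(frame_b[:, :, ch], bins=bins, range=(0, 256), density=True)
        centers = (edges[:-1] + edges[1:]) / 2.0
        dists.append(wasserstein_distance(centers, centers, u_weights=ha, v_weights=hb))
    return float(np.mean(dists))

# Usage: read one frame per clip (or per keyframe) and compare, e.g. with OpenCV:
# import cv2
# fa = cv2.VideoCapture("a.mp4").read()[1]
# fb = cv2.VideoCapture("b.mp4").read()[1]
# print(frame_color_emd(fa, fb))
```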

The following three are probably the most cited papers in this field, all by different research groups:

  1. Yang, J., Y. G. Jiang, A. G. Hauptmann, and C. W. Ngo. “Evaluating Bag-of-visual-words Representations in Scene Classification.” In Proceedings of the International Workshop on Workshop on Multimedia Information Retrieval, 197–206, 2007. http://dl.acm.org/citation.cfm?id=1290111.

  2. Shen, H. T., X. Zhou, Z. Huang, J. Shao, and X. Zhou. “UQLIPS: a Real-time Near-duplicate Video Clip Detection System.” In Proceedings of the 33rd International Conference on Very Large Data Bases, 1374–1377, 2007. http://dl.acm.org/citation.cfm?id=1326018.

  3. Wu, X., A. G. Hauptmann, and C. W. Ngo. “Practical Elimination of Near-duplicates from Web Video Search.” In Proceedings of the 15th International Conference on Multimedia, 218–227, 2007. http://dl.acm.org/citation.cfm?id=1291280.

The approach of Yang et al. is the same as the method used in SOTU.

answered Sep 20 '22 by Alex I