I have been using FFmpeg to find the middle frame of an H.264 video file and extract a JPEG thumbnail for use on a streaming portal. This is done automatically for each uploaded video.
Sometimes the frame happens to be a black frame, or just semantically bad, e.g. a background or blurry shot that doesn't relate well to the video content.
I wonder if I can use OpenCV or some other method/library to programmatically find better thumbnails through face detection or frame analysis.
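For reference, the current pipeline can be sketched roughly like this (a minimal sketch, assuming `ffprobe` and `ffmpeg` are on PATH; the file names and helper functions are hypothetical, not part of any actual portal code):

```python
import subprocess

def probe_duration(path):
    """Ask ffprobe (assumed on PATH) for a video's duration in seconds."""
    out = subprocess.run(
        ["ffprobe", "-v", "error", "-show_entries", "format=duration",
         "-of", "default=noprint_wrappers=1:nokey=1", path],
        capture_output=True, text=True, check=True,
    ).stdout
    return float(out.strip())

def middle_frame_cmd(src, dst, duration):
    """Build an ffmpeg command that grabs a single frame at the midpoint.
    -ss before -i seeks fast; -frames:v 1 stops after one frame."""
    midpoint = duration / 2.0
    return ["ffmpeg", "-y", "-ss", f"{midpoint:.3f}", "-i", src,
            "-frames:v", "1", "-q:v", "2", dst]

if __name__ == "__main__":
    src = "upload.mp4"  # hypothetical uploaded file
    cmd = middle_frame_cmd(src, "thumb.jpg", probe_duration(src))
    subprocess.run(cmd, check=True)
```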
I've run into that problem myself and came up with a very crude yet simple algorithm to make my thumbnails more "interesting": extract several candidate frames and keep the one whose JPEG file is largest.
Why does this work? Because a JPEG of a monotone, "boring" image, like an all-black screen, compresses into a much smaller file than an image with many objects and colors in it.
It's not perfect, but it's a viable 80/20 solution. (It solves 80% of the problem with 20% of the work.) Coding something that actually analyzes the image itself is going to be considerably more work.
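The heuristic above can be sketched in a few lines (the function names, sampling interval, and input file are illustrative assumptions; `ffmpeg` is assumed on PATH):

```python
import os
import subprocess

def extract_candidates(src, out_pattern="cand_%02d.jpg", every_n_sec=10):
    """Dump one candidate JPEG every `every_n_sec` seconds of video."""
    subprocess.run(
        ["ffmpeg", "-y", "-i", src, "-vf", f"fps=1/{every_n_sec}",
         "-q:v", "2", out_pattern],
        check=True,
    )

def pick_most_interesting(paths):
    """A bigger JPEG generally means a busier, more detailed frame."""
    return max(paths, key=os.path.getsize)

if __name__ == "__main__":
    extract_candidates("upload.mp4")  # hypothetical uploaded file
    candidates = sorted(p for p in os.listdir(".") if p.startswith("cand_"))
    print(pick_most_interesting(candidates))
```

The selection step needs no image decoding at all, which is what keeps this approach cheap.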
Libavfilter has a thumbnail filter, which is meant to pick the most representative frame from a series of frames. I'm not sure how it works, but here are the docs: http://ffmpeg.org/libavfilter.html#thumbnail
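As a hedged sketch of how it might be invoked (the batch size of 100 is the filter's documented default; the file names are hypothetical):

```python
import subprocess

def thumbnail_cmd(src, dst, batch=100):
    """Build an ffmpeg command using the thumbnail filter, which scans
    frames in batches of `batch` and keeps the most representative one."""
    return ["ffmpeg", "-y", "-i", src,
            "-vf", f"thumbnail={batch}", "-frames:v", "1", dst]

if __name__ == "__main__":
    # Assumes ffmpeg is on PATH and "upload.mp4" exists.
    subprocess.run(thumbnail_cmd("upload.mp4", "thumb.jpg"), check=True)
```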