I am working on a Python project that uses ffmpeg as part of its core functionality. Essentially, the functionality from ffmpeg that I use boils down to these two commands:
ffmpeg -i udp://<address:port> -qscale:v 2 -vf "fps=30" sttest%04d.jpg
ffmpeg -i udp://<address:port> -map data-re -codec copy -f data out.bin
Pretty simple stuff.
I am trying to create a self-contained program (which uses the above ffmpeg functionality) that can easily be installed on any particular system without relying on that system already having the necessary dependencies, since I would package those dependencies with the program itself.
With that in mind, would it be best to use the libav* libraries to perform this functionality from within the program? Or would a wrapper (ffmpy) for the ffmpeg command-line tool be a better option? My current thinking on the drawbacks of each is that using the libraries may be the best practice, but it seems overly complex to have to learn how to use them (and potentially learn C, which I have never worked with, in the process) just to do the two basic things mentioned above. The libraries are a bit of a black box to me and don't have much documentation. The problem with a wrapper for ffmpeg, on the other hand, is that it essentially relies on calling a subprocess, which seems somewhat sloppy, although I'm not sure why I feel so viscerally opposed to subprocesses.
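For illustration, here is roughly what the first command could look like through ffmpy, which builds the command line from inputs/outputs dictionaries and then runs the ffmpeg binary for you; the udp://127.0.0.1:1234 address below is just a placeholder for the real source:

import ffmpy

# Placeholder source address; substitute the real udp://<address:port>.
source = "udp://127.0.0.1:1234"

# Frame extraction: 30 fps JPEGs at quality 2, mirroring the first command above.
ff = ffmpy.FFmpeg(
    inputs={source: None},
    outputs={"sttest%04d.jpg": ["-qscale:v", "2", "-vf", "fps=30"]},
)
print(ff.cmd)  # the ffmpeg command line ffmpy will execute
ff.run()       # spawns ffmpeg in a subprocess under the hood

Note that ffmpy still spawns the ffmpeg executable in a subprocess, so it's a convenience layer over the same approach I'm hesitant about, not an alternative to it.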
FFmpeg has been used in the core processing of video platforms like YouTube and iTunes. Most of us have used a media player such as VLC to play video files, and VLC uses the FFmpeg libraries at its core. Some video editors and mobile applications also use FFmpeg under the hood.
ffmpeg is a command-line tool that converts audio and video between formats. It can also capture and encode in real time from various hardware and software sources, such as a TV capture card.
avconv is a command-line tool for transcoding multimedia files. It can be used on its own, but it is part of the larger Libav project, a set of free and open-source libraries for dealing with multimedia formats of all sorts. The Libav project was forked from the FFmpeg codebase in 2011.
It's somewhat a matter of opinion, but I would suggest using the ffmpeg CLI in a subprocess as long as you're doing something it supports well, and reaching for the libav* libraries only if you have some requirement that the CLI can't really satisfy.

Although you can get more flexibility with the libraries, the API is very intricate, and you would probably spend most of your time duplicating what the CLI already does (ffmpeg.c, just the main program gluing the libraries together, is around 4800 lines, not including its 3700-line option parser). It's also likely you would add a few bugs along the way, especially if you're lacking C knowledge. So if you can get the CLI to do what you need, that's undoubtedly the path of least resistance. There's no shame in subprocesses: that's the Unix way!
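To make that concrete, a minimal sketch of the two commands driven from Python's standard subprocess module might look like the following; the UDP address is a placeholder, while the options and output names come straight from the commands in the question:

import subprocess

# Placeholder source address; substitute the real udp://<address:port>.
source = "udp://127.0.0.1:1234"

# Frame extraction: 30 fps JPEGs at quality 2.
extract_frames = [
    "ffmpeg", "-i", source,
    "-qscale:v", "2",
    "-vf", "fps=30",
    "sttest%04d.jpg",
]

# Data stream dump to a binary file.
dump_data = [
    "ffmpeg", "-i", source,
    "-map", "data-re",
    "-codec", "copy",
    "-f", "data",
    "out.bin",
]

# check=True turns a non-zero ffmpeg exit status into a CalledProcessError.
subprocess.run(extract_frames, check=True)
subprocess.run(dump_data, check=True)

Passing the arguments as a list avoids shell quoting issues entirely, which is one reason a subprocess here is less sloppy than it might feel.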