I have three videos:
I want to create a final video with those three components taking up a certain region of the screen.
Is there open-source software that would allow me to do this (mencoder, ffmpeg, virtualdub..)? Which do you recommend?
Or is there a C/C++ API that would enable me to create something like that programmatically?
Edit
There will be multiple recorded lectures in the future. This means that I need a generic/automated solution.
I'm currently checking out if I could write an application with GStreamer to do this job. Any comments on that?
Solved!
I succeeded in doing this with GStreamer's videomixer element. I use the gst-launch syntax to create a pipeline and then load it with gst_parse_launch. It's a really productive way to implement complex pipelines.
Here's a pipeline that takes two incoming video streams and a logo image, blends them into one stream, and then duplicates it so that it is simultaneously displayed and saved to disk.
desktop. ! queue
! ffmpegcolorspace
! videoscale
! video/x-raw-yuv,width=640,height=480
! videobox right=-320
! ffmpegcolorspace
! vmix.sink_0
webcam. ! queue
! ffmpegcolorspace
! videoscale
! video/x-raw-yuv,width=320,height=240
! vmix.sink_1
logo. ! queue
! jpegdec
! ffmpegcolorspace
! videoscale
! video/x-raw-yuv,width=320,height=240
! vmix.sink_2
vmix. ! t.
t. ! queue
! ffmpegcolorspace
! ffenc_mpeg2video
! filesink location="recording.mpg"
t. ! queue
! ffmpegcolorspace
! dshowvideosink
videotestsrc name="desktop"
videotestsrc name="webcam"
multifilesrc name="logo" location="logo.jpg"
videomixer name=vmix
sink_0::xpos=0 sink_0::ypos=0 sink_0::zorder=0
sink_1::xpos=640 sink_1::ypos=0 sink_1::zorder=1
sink_2::xpos=640 sink_2::ypos=240 sink_2::zorder=2
tee name="t"
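Note that the pipeline above uses GStreamer 0.10 element names (ffmpegcolorspace, ffenc_mpeg2video, the video/x-raw-yuv caps). A minimal sketch of the same mixing idea under GStreamer 1.x, with test sources standing in for the real inputs (the 1.x element names are the renamed equivalents, not what the original pipeline used):

```shell
# GStreamer 1.x sketch: two test sources mixed side by side and displayed.
# 1.x renames: ffmpegcolorspace -> videoconvert, ffenc_mpeg2video -> avenc_mpeg2video,
# video/x-raw-yuv -> video/x-raw.
gst-launch-1.0 \
  videomixer name=vmix sink_0::xpos=0 sink_1::xpos=320 \
  vmix. ! videoconvert ! autovideosink \
  videotestsrc ! video/x-raw,width=320,height=240 ! vmix.sink_0 \
  videotestsrc pattern=ball ! video/x-raw,width=320,height=240 ! vmix.sink_1
```

The per-pad properties (sink_N::xpos and friends) work the same way as in the pipeline above.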
It can be done with ffmpeg; I've done it myself. That said, it is technically complex; then again, it is what any other software you might use would be doing at its core.
The process works like this:
I think what surprises folks is that you can literally concatenate two raw PCM WAV audio files and the result is valid. What really surprises people is that you can do the same with MPEG-1/H.261 video.
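A hedged sketch of that concatenation trick (the file names are assumptions; the second step remuxes without re-encoding so the joined file gets clean container timestamps):

```shell
# Byte-concatenate two MPEG-1 program streams, then remux with
# stream copy (-c copy) so timestamps and duration are rewritten.
cat part1.mpg part2.mpg > joined.mpg
ffmpeg -i joined.mpg -c copy joined-clean.mpg
```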
Like I've said, I've done it. There are some specifics left out, but it most definitely works. My implementation was a bash script driving ffmpeg. While I've never used the ffmpeg C API, I don't see why you could not use it to do the same thing.
It's a highly educational project, if you're so inclined. If your goal is just to slap some videos together for a one-off project, then a GUI tool may be a better idea.
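For the spatial layout the question actually asks about (inputs occupying fixed regions of one frame), ffmpeg can also composite directly with -filter_complex. A sketch, assuming hypothetical input files desktop.mp4 and webcam.mp4 and the same layout as the accepted answer (main video on the left, picture-in-picture at top right):

```shell
# Scale the desktop capture to 640x480 on a 960x480 canvas, then
# overlay the webcam at x=640,y=0 (file names and sizes are assumptions).
ffmpeg -i desktop.mp4 -i webcam.mp4 -filter_complex \
  "[0:v]scale=640:480,pad=960:480[base];[1:v]scale=320:240[pip];[base][pip]overlay=640:0[out]" \
  -map "[out]" -c:v mpeg2video recording.mpg
```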
If you just want to combine footage into a single video and crop it, I'd use VirtualDub.
You can combine multiple video files/streams into one picture with VLC's mosaic feature; there is a command-line interface, so you can script/automate it:
http://wiki.videolan.org/Mosaic