I plan to develop a tool for real-time video manipulation using C++, Qt and OpenGL. Video overlay isn't an option, since shaders should be used for frame processing. At the moment I imagine the following sequence of steps:
I'm looking for some general advice explaining what extensions or techniques can be used here. Is there a good reason to use Direct3D instead?
First thing, on the PC there's no explicit way to use DMA. The driver might use it, or might use something else.
In any case, step 3 will be "change texture data on the graphics card". In OpenGL that's the PBO (Pixel Buffer Object) extension or the good old glTexSubImage* functions. In D3D9 it's LockRect on the texture, or one of the other paths (e.g. LockRect on a scratch texture, then blit into a GPU texture). Any of those could potentially use DMA, but you can't be sure.
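For the OpenGL side, the PBO path looks roughly like this. This is only a sketch: it assumes GLEW for extension loading, a current GL context, BGRA 8-bit frames, and an already-created texture; the function name uploadFrame and its parameters are made up for illustration, and error checking is omitted.

```cpp
#include <GL/glew.h>
#include <cstring>

// Stream one decoded frame into a texture through a pixel buffer object.
void uploadFrame(GLuint pbo, GLuint texture,
                 const void* frameData, int frameWidth, int frameHeight)
{
    const size_t frameBytes = size_t(frameWidth) * frameHeight * 4; // BGRA8

    // Bind the PBO and orphan its old storage so we don't stall on the GPU.
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo);
    glBufferData(GL_PIXEL_UNPACK_BUFFER, frameBytes, nullptr, GL_STREAM_DRAW);

    // Map the buffer and copy the frame into it. Whether the driver then
    // DMAs this region to the card is up to the driver.
    if (void* dst = glMapBuffer(GL_PIXEL_UNPACK_BUFFER, GL_WRITE_ONLY)) {
        std::memcpy(dst, frameData, frameBytes);
        glUnmapBuffer(GL_PIXEL_UNPACK_BUFFER);
    }

    // With a PBO bound, the last argument of glTexSubImage2D is an offset
    // into the buffer, so the texture update can proceed asynchronously.
    glBindTexture(GL_TEXTURE_2D, texture);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, frameWidth, frameHeight,
                    GL_BGRA, GL_UNSIGNED_BYTE, nullptr);

    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);
}
```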
Then the data is in a texture. You can render it to the screen with some shaders (e.g. doing YCbCr conversion), or render into other texture(s) to do more complex processing effects (e.g. blur/glow/...).
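For the colour-conversion pass, a fragment shader along these lines would do. Again just a sketch: it assumes the Y, Cb and Cr planes were uploaded as three separate single-channel textures (an assumption, not something from the question), uses full-range BT.601 coefficients, and is written in legacy GLSL; the string would be handed to glShaderSource() or QGLShaderProgram.

```cpp
// Fragment shader combining Y/Cb/Cr planes into RGB, as a C++ string.
static const char* kYCbCrFragmentShader = R"(
    uniform sampler2D texY;
    uniform sampler2D texCb;
    uniform sampler2D texCr;

    void main()
    {
        float y  = texture2D(texY,  gl_TexCoord[0].st).r;
        float cb = texture2D(texCb, gl_TexCoord[0].st).r - 0.5;
        float cr = texture2D(texCr, gl_TexCoord[0].st).r - 0.5;

        // Full-range BT.601 YCbCr -> RGB
        gl_FragColor = vec4(y + 1.402 * cr,
                            y - 0.344 * cb - 0.714 * cr,
                            y + 1.772 * cb,
                            1.0);
    }
)";
```

Rendering into another texture for further passes (blur, glow, ...) is the same draw call with a framebuffer object bound instead of the window.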
Using Direct3D is easier in the sense that there are clearly defined "fast ways" of doing things. In OpenGL there are many more options for doing anything, and you have to somehow figure out which ones are fast (sometimes the fast paths differ between platforms or hardware).
If you're on Linux, NVIDIA's recent drivers in the 180.xx series have added support for video decoding via the VDPAU API (Video Decode and Presentation API for Unix). Many major projects have integrated with the API, including MPlayer, VLC, FFmpeg, and MythTV. I don't know all of the specifics, but it provides an API for many codecs, including common sub-operations and bitstream manipulation.
I'd look here before going straight to CUDA (which I assume VDPAU may use)
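For reference, bringing up VDPAU looks roughly like this. This is a sketch, not working decoder code: only vdp_device_create_x11 is exported directly from libvdpau, everything else is fetched through the returned get_proc_address; the H.264 profile, frame size and reference count here are placeholder values, and all error handling is omitted.

```cpp
#include <vdpau/vdpau.h>
#include <vdpau/vdpau_x11.h>
#include <X11/Xlib.h>

int main()
{
    Display* display = XOpenDisplay(nullptr);

    // Create the VDPAU device for this X display/screen.
    VdpDevice device;
    VdpGetProcAddress* get_proc_address = nullptr;
    vdp_device_create_x11(display, DefaultScreen(display),
                          &device, &get_proc_address);

    // Fetch the decoder-creation entry point.
    VdpDecoderCreate* decoder_create = nullptr;
    get_proc_address(device, VDP_FUNC_ID_DECODER_CREATE,
                     reinterpret_cast<void**>(&decoder_create));

    // Create a decoder; bitstream data would then be fed to it via
    // VdpDecoderRender, producing frames in VdpVideoSurface objects.
    VdpDecoder decoder;
    decoder_create(device, VDP_DECODER_PROFILE_H264_HIGH,
                   1920, 1080, /*max_references=*/16, &decoder);

    return 0;
}
```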