 

How do I use Hardware accelerated video/H.264 decoding with directx 11 and windows 7?

I've been researching all day and haven't gotten very far. I'm on Windows 7, using DirectX 11 (my final output is a frame of video rendered onto a DX11 texture). I want to decode some very large H.264 video files, and the CPU (using libav) doesn't cut it.

I've looked at the hwaccel capabilities of libav using DXVA2, but hit a roadblock when I needed to create an IDirectXVideoDecoder, which can only be created with a D3D9 interface (which I don't have, since I'm using DX11).

Whenever I've looked up DXVA documentation, it doesn't reference DX11. Was this removed in DX10 or 11? (I can't find any confirmation of this, nor anywhere that says DXVA2 is redundant; possibly it's been superseded by DXVA-HD?)

Then I looked into the Media Foundation SDK, as that looks like what I'm supposed to use for DX11... but none of the types exist in my headers (the docs say to just include <d3d11.h>, but that yields nothing). They also specify a minimum of Windows 8 to use it.

I believe that to use MF I need the Windows 8 SDK, which now includes all the DirectX libs/headers.

So this leaves a gap with Windows 7... Is it possible to get hardware-accelerated video decoding? And if so, which API am I supposed to be using?

Edit: As another follow-up, my Media Foundation (and AVF, Android, Magic Leap, etc.) implementation is in my open source project https://github.com/NewChromantics/PopH264

Edit 2: But I don't know if it supports Win7 :)

asked Nov 07 '13 by Soylent Graham


1 Answer

D3D11 features a video API which is basically DXVA2 with a slightly altered interface on top. You need a good understanding of H.264 bitstreams to proceed (really!), i.e. make sure you have an H.264 parser at hand to extract the fields of the SPS and PPS structures and all the slices of an encoded frame.
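The parser itself isn't covered here. As a minimal sketch (names and layout are purely illustrative), splitting an Annex-B elementary stream into NAL units, so you can pick out the SPS (NAL type 7), PPS (type 8) and slice NALs (types 1/5), could look like this:

#include <cstddef>
#include <cstdint>
#include <vector>

struct Nal { const uint8_t* data; size_t size; int type; };

// Minimal Annex-B splitter: finds 00 00 01 / 00 00 00 01 start codes and returns
// the NAL units in between (emulation-prevention bytes are left in place).
static std::vector<Nal> SplitAnnexB(const uint8_t* data, size_t size)
{
    std::vector<Nal> nals;
    size_t pos = 0, start = SIZE_MAX;
    while (pos + 3 <= size)
    {
        bool startCode = data[pos] == 0 && data[pos + 1] == 0 &&
                        (data[pos + 2] == 1 ||
                        (pos + 4 <= size && data[pos + 2] == 0 && data[pos + 3] == 1));
        if (startCode)
        {
            size_t scLen = (data[pos + 2] == 1) ? 3 : 4;
            if (start != SIZE_MAX)
                nals.push_back({ data + start, pos - start, data[start] & 0x1F });
            start = pos + scLen;
            pos += scLen;
        }
        else
        {
            ++pos;
        }
    }
    if (start != SIZE_MAX && start < size)
        nals.push_back({ data + start, size - start, data[start] & 0x1F });
    return nals;
}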

1) Obtain an ID3D11VideoDevice instance from your ID3D11Device, and an ID3D11VideoContext from your immediate D3D11 device context. NOTE: On Win7, you have to create your device with feature level 9_3 to get video support! (On Win8 it just works.)
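A minimal sketch of that step (assuming d3dDevice and d3dContext are your already-created ID3D11Device and immediate ID3D11DeviceContext; error handling omitted):

// The video interfaces are obtained via QueryInterface on the regular device/context.
ID3D11VideoDevice*  videoDevice  = nullptr;
ID3D11VideoContext* videoContext = nullptr;

HRESULT hr = d3dDevice->QueryInterface(__uuidof(ID3D11VideoDevice),
                                       reinterpret_cast<void**>(&videoDevice));
if (SUCCEEDED(hr))
    hr = d3dContext->QueryInterface(__uuidof(ID3D11VideoContext),
                                    reinterpret_cast<void**>(&videoContext));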

2) Create an ID3D11VideoDecoder instance for H.264. Use ID3D11VideoDevice::GetVideoDecoderProfileCount, GetVideoDecoderProfile and CheckVideoDecoderFormat to iterate through all supported profiles, and find one with the GUID D3D11_DECODER_PROFILE_H264_VLD_NOFGT for H.264 without film grain. As the OutputFormat your best bet is DXGI_FORMAT_NV12.
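A rough sketch of the profile enumeration and decoder creation (width and height stand for the coded size taken from the SPS; error handling mostly omitted):

// Find the H.264 VLD (no film grain) profile and create a decoder for it.
GUID chosenProfile = {};
bool found = false;

UINT profileCount = videoDevice->GetVideoDecoderProfileCount();
for (UINT i = 0; i < profileCount && !found; ++i)
{
    GUID profile;
    if (FAILED(videoDevice->GetVideoDecoderProfile(i, &profile)))
        continue;
    if (profile != D3D11_DECODER_PROFILE_H264_VLD_NOFGT)
        continue;
    BOOL supported = FALSE;
    videoDevice->CheckVideoDecoderFormat(&profile, DXGI_FORMAT_NV12, &supported);
    if (supported) { chosenProfile = profile; found = true; }
}

D3D11_VIDEO_DECODER_DESC decoderDesc = {};
decoderDesc.Guid         = chosenProfile;
decoderDesc.SampleWidth  = width;    // coded width from the SPS
decoderDesc.SampleHeight = height;   // coded height from the SPS
decoderDesc.OutputFormat = DXGI_FORMAT_NV12;

// Pick one of the supported decoder configs (the first one is used here for brevity).
UINT configCount = 0;
videoDevice->GetVideoDecoderConfigCount(&decoderDesc, &configCount);
D3D11_VIDEO_DECODER_CONFIG config = {};
videoDevice->GetVideoDecoderConfig(&decoderDesc, 0, &config);

ID3D11VideoDecoder* decoder = nullptr;
HRESULT hr = videoDevice->CreateVideoDecoder(&decoderDesc, &config, &decoder);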

3) Decoding of the individual frames (see Supporting Direct3D 11 Video Decoding in Media Foundation; a combined sketch of the per-frame flow follows after the lists below):

  • ID3D11VideoContext::DecoderBeginFrame( decoder, outputView -> decoded frame texture )
  • Fill buffers:
    • D3D11_VIDEO_DECODER_BUFFER_PICTURE_PARAMETERS
    • D3D11_VIDEO_DECODER_BUFFER_INVERSE_QUANTIZATION_MATRIX
    • D3D11_VIDEO_DECODER_BUFFER_BITSTREAM
    • D3D11_VIDEO_DECODER_BUFFER_SLICE_CONTROL

The buffers are filled with the corresponding DXVA2 structures (see dxva.h). The full DXVA2 spec is here; you'll need it to map the H.264 SPS/PPS fields accordingly.
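To give an idea of that mapping, here is an illustrative, incomplete sketch (sps, pps, sliceHeader and currentSurfaceIndex stand for your own parsed data; verify every field against the DXVA H.264 spec linked below):

#include <dxva.h>   // DXVA_PicParams_H264, DXVA_Qmatrix_H264, slice control structures

DXVA_PicParams_H264 pp = {};
pp.wFrameWidthInMbsMinus1    = (USHORT)sps.pic_width_in_mbs_minus1;
pp.wFrameHeightInMbsMinus1   = (USHORT)sps.pic_height_in_map_units_minus1; // when frame_mbs_only_flag == 1
pp.num_ref_frames            = (UCHAR)sps.max_num_ref_frames;
pp.chroma_format_idc         = sps.chroma_format_idc;
pp.frame_mbs_only_flag       = sps.frame_mbs_only_flag;
pp.log2_max_frame_num_minus4 = sps.log2_max_frame_num_minus4;
pp.pic_order_cnt_type        = sps.pic_order_cnt_type;
pp.pic_init_qp_minus26       = pps.pic_init_qp_minus26;
pp.entropy_coding_mode_flag  = pps.entropy_coding_mode_flag;
pp.frame_num                 = sliceHeader.frame_num;
pp.CurrPic.Index7Bits        = currentSurfaceIndex; // index of the surface being decoded into
pp.CurrPic.AssociatedFlag    = 0;                   // 0 = frame / top field
pp.ContinuationFlag          = 1;
// ...plus the reference picture lists (RefFrameList, FrameNumList, FieldOrderCntList),
// the remaining flags and StatusReportFeedbackNumber, as described in the spec.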

See:

  • About DXVA 2.0
  • https://software.intel.com/sites/default/files/m/b/4/7/DXVA_H264.pdf

Then:

  • ID3D11VideoContext::SubmitDecoderBuffers to commit all filled buffers
  • ID3D11VideoContext::DecoderEndFrame to finish the current frame
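
Put together, a single frame could be decoded roughly like this (illustrative sketch; decoder, videoContext, outputView and the picParams/qmatrix/slice buffers are assumed to have been prepared as described above):

// Decode one frame: begin, fill/commit the four buffers, end.
HRESULT hr = videoContext->DecoderBeginFrame(decoder, outputView, 0, nullptr);

// Copy each prepared structure into the corresponding driver-provided buffer.
auto fillBuffer = [&](D3D11_VIDEO_DECODER_BUFFER_TYPE type, const void* src, UINT srcSize)
{
    void* dst = nullptr;
    UINT dstSize = 0;
    videoContext->GetDecoderBuffer(decoder, type, &dstSize, &dst);
    memcpy(dst, src, srcSize < dstSize ? srcSize : dstSize);
    videoContext->ReleaseDecoderBuffer(decoder, type);
};

fillBuffer(D3D11_VIDEO_DECODER_BUFFER_PICTURE_PARAMETERS,          &picParams, sizeof(picParams));
fillBuffer(D3D11_VIDEO_DECODER_BUFFER_INVERSE_QUANTIZATION_MATRIX, &qmatrix,   sizeof(qmatrix));
fillBuffer(D3D11_VIDEO_DECODER_BUFFER_BITSTREAM,                   sliceData,  sliceDataSize);
fillBuffer(D3D11_VIDEO_DECODER_BUFFER_SLICE_CONTROL,               sliceCtrl,  sliceCtrlSize);

// Describe what was filled and submit everything in one call.
D3D11_VIDEO_DECODER_BUFFER_DESC bufferDescs[4] = {};
bufferDescs[0].BufferType = D3D11_VIDEO_DECODER_BUFFER_PICTURE_PARAMETERS;
bufferDescs[0].DataSize   = sizeof(picParams);
bufferDescs[1].BufferType = D3D11_VIDEO_DECODER_BUFFER_INVERSE_QUANTIZATION_MATRIX;
bufferDescs[1].DataSize   = sizeof(qmatrix);
bufferDescs[2].BufferType = D3D11_VIDEO_DECODER_BUFFER_BITSTREAM;
bufferDescs[2].DataSize   = sliceDataSize;
bufferDescs[3].BufferType = D3D11_VIDEO_DECODER_BUFFER_SLICE_CONTROL;
bufferDescs[3].DataSize   = sliceCtrlSize;

hr = videoContext->SubmitDecoderBuffers(decoder, 4, bufferDescs);
hr = videoContext->DecoderEndFrame(decoder);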

Also for 3): the D3D11_VIDEO_DECODER_BUFFER_PICTURE_PARAMETERS buffer contains info on all reference frames/surfaces as well. You need to manage them yourself, i.e. make sure the surfaces/textures are available to the GPU!
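For example (again just a sketch, assuming d3dDevice/videoDevice from step 1 and width/height from the SPS), one common layout is a single NV12 texture array bound with D3D11_BIND_DECODER, with one decoder output view per slice; reference frames then stay alive simply by not reusing their slices until the bitstream no longer references them:

// Pool of decode/reference surfaces: one NV12 Texture2D array.
D3D11_TEXTURE2D_DESC texDesc = {};
texDesc.Width            = width;
texDesc.Height           = height;
texDesc.MipLevels        = 1;
texDesc.ArraySize        = 17;               // e.g. 16 reference slots + current frame
texDesc.Format           = DXGI_FORMAT_NV12;
texDesc.SampleDesc.Count = 1;
texDesc.Usage            = D3D11_USAGE_DEFAULT;
texDesc.BindFlags        = D3D11_BIND_DECODER;

ID3D11Texture2D* surfacePool = nullptr;
HRESULT hr = d3dDevice->CreateTexture2D(&texDesc, nullptr, &surfacePool);

// One output view per array slice; pass the target slice's view to DecoderBeginFrame
// and refer to the other slices by index in RefFrameList of DXVA_PicParams_H264.
ID3D11VideoDecoderOutputView* outputViews[17] = {};
for (UINT i = 0; i < texDesc.ArraySize; ++i)
{
    D3D11_VIDEO_DECODER_OUTPUT_VIEW_DESC viewDesc = {};
    viewDesc.DecodeProfile        = D3D11_DECODER_PROFILE_H264_VLD_NOFGT;
    viewDesc.ViewDimension        = D3D11_VDOV_DIMENSION_TEXTURE2D;
    viewDesc.Texture2D.ArraySlice = i;
    hr = videoDevice->CreateVideoDecoderOutputView(surfacePool, &viewDesc, &outputViews[i]);
}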

It's quite complicated; check FFmpeg and Media Player Classic, they both have DXVA2 support (though not via DX11).

4) Convert from NV12 to RGB(A). Some GPUs (D3D11 feature levels) allow using NV12 as a shader input, some don't. In case it's not possible to use NV12 directly, have a look at the ID3D11VideoProcessor interfaces, which feature NV12/YUV420->RGB conversion for all GPUs with D3D11 support.

The conversion could be performed in code like this:

// Setup ID3D11Video* (creation elided here; see the sketch further below)
ID3D11VideoContext             * d3dVideoContext  = ...;
ID3D11VideoProcessor           * d3dVideoProc     = ...;
ID3D11VideoDevice              * d3dVideoDevice   = ...;
ID3D11VideoProcessorEnumerator * d3dVideoProcEnum = ...;

// Source (decoded NV12) and destination (RGB target) textures
ID3D11Texture2D * srcTextureNV12Fmt = ...;
ID3D11Texture2D * dstTextureRGBFmt  = ...;

HRESULT hr;

// Use the Video Processor

// Create views for the VideoProcessor in/output
ID3D11VideoProcessorInputView  * videoProcInputView;
ID3D11VideoProcessorOutputView * videoProcOutputView;

{
    D3D11_VIDEO_PROCESSOR_INPUT_VIEW_DESC inputViewDesc = { 0 };
    inputViewDesc.ViewDimension = D3D11_VPIV_DIMENSION_TEXTURE2D;
    inputViewDesc.Texture2D.ArraySlice = arraySliceIdx; // slice of the decode texture array holding the frame
    inputViewDesc.Texture2D.MipSlice = 0;
    hr = d3dVideoDevice->CreateVideoProcessorInputView(srcTextureNV12Fmt, d3dVideoProcEnum, &inputViewDesc, &videoProcInputView);
}

{
    D3D11_VIDEO_PROCESSOR_OUTPUT_VIEW_DESC outputViewDesc = { D3D11_VPOV_DIMENSION_TEXTURE2D };
    outputViewDesc.Texture2D.MipSlice = 0;
    hr = d3dVideoDevice->CreateVideoProcessorOutputView(dstTextureRGBFmt, d3dVideoProcEnum, &outputViewDesc, &videoProcOutputView);
}

// Setup the stream
D3D11_VIDEO_PROCESSOR_STREAM streams = { 0 };
streams.Enable = TRUE;
streams.pInputSurface = videoProcInputView;

RECT srcRect = { /* source rectangle in pixels */ };
RECT dstRect = { /* destination rectangle in pixels */ };
// (optional) pass the rectangles via VideoProcessorSetStreamSourceRect /
// VideoProcessorSetStreamDestRect before the blit

// Perform the VideoProcessor blit operation (with color conversion)
hr = d3dVideoContext->VideoProcessorBlt(d3dVideoProc, videoProcOutputView, 0, 1, &streams);
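
The "Setup ID3D11Video*" part elided at the top of this snippet could be filled in roughly like this (a sketch; the content description values are placeholders for your video size):

// Create the video processor enumerator and processor used by the blit above.
D3D11_VIDEO_PROCESSOR_CONTENT_DESC contentDesc = {};
contentDesc.InputFrameFormat = D3D11_VIDEO_FRAME_FORMAT_PROGRESSIVE;
contentDesc.InputWidth       = videoWidth;   // decoded (NV12) size
contentDesc.InputHeight      = videoHeight;
contentDesc.OutputWidth      = videoWidth;   // RGB target size
contentDesc.OutputHeight     = videoHeight;
contentDesc.Usage            = D3D11_VIDEO_USAGE_PLAYBACK_NORMAL;

hr = d3dVideoDevice->CreateVideoProcessorEnumerator(&contentDesc, &d3dVideoProcEnum);
hr = d3dVideoDevice->CreateVideoProcessor(d3dVideoProcEnum, 0 /*RateConversionIndex*/, &d3dVideoProc);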
answered Nov 26 '22 by youaresoomean