I'm new to HLSL. I am trying to convert the color space of an image captured with the DXGI Desktop Duplication API from BGRA to YUV444, using a texture as the render target.
I have set up my pixel shader to perform the required transformation. Taking the 4:2:0 sub-sampled YUV from the render-target texture and encoding it as H.264 using ffmpeg, I can see the image.
The problem is that the image comes out greenish.
The color input to the shader is of float type, but the coefficient matrices available for RGB-to-YUV conversion assume integer (0-255) color values.
If I use a clamp function and convert the input colors to integers, I lose accuracy.
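For reference, this is my understanding of the value ranges involved (it may be where I am going wrong; the sketch assumes 8-bit UNORM formats for the sampled texture and the render target):
// Sketch only: relating the integer-domain (0..255) formula to the normalized floats in the shader.
float3 rgb01 = tx.Sample(samLinear, input.Tex).rgb; // each channel is already normalized to [0.0, 1.0]
float  chromaOffset = 128.0f / 255.0f;              // the "+128" of the integer formula, roughly 0.5
// A [0,1] float written to an 8-bit UNORM render target is converted back to 0..255 on write,
// so no explicit clamp or integer conversion should be needed inside the shader.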
Any suggestions and directions are welcome. Please let me know if any other information helps.
I suspect the pixel shader I wrote, as I am working with it for the first time. Here is the pixel shader:
float3 rgb_to_yuv(float3 RGB)
{
    float y = dot(RGB, float3(0.29900f, -0.16874f, 0.50000f));
    float u = dot(RGB, float3(0.58700f, -0.33126f, -0.41869f));
    float v = dot(RGB, float3(0.11400f, 0.50000f, -0.08131f));
    return float3(y, u, v);
}

float4 PS(PS_INPUT input) : SV_Target
{
    float4 rgba, yuva;
    rgba = tx.Sample(samLinear, input.Tex);
    float3 ctr = float3(0, 0, .5f);
    return float4(rgb_to_yuv(rgba.rgb) + ctr, rgba.a);
}
The render target is mapped to a CPU-readable texture, and the YUV444 data is copied into 3 BYTE arrays and supplied to the ffmpeg libx264 encoder.
The encoder writes the encoded packets to a video file.
Here, for each 2x2 block of pixels, I take one U (Cb) value, one V (Cr) value, and 4 Y values.
I retrieve the YUV420 data from the texture as follows:
for (size_t h = 0, uvH = 0; h < desc.Height; ++h)
{
    for (size_t w = 0, uvW = 0; w < desc.Width; ++w)
    {
        dist = resource1.RowPitch * h + w * 4;
        distance = resource.RowPitch * h + w * 4;   // byte offset of this texel in the mapped texture
        distance2 = inframe->linesize[0] * h + w;   // offset into the Y plane
        data = sptr[distance + 2];                  // third byte of the texel -> Y
        pY[distance2] = data;
        if (w % 2 == 0 && h % 2 == 0)               // one chroma sample per 2x2 block
        {
            data1 = sptr[distance + 1];             // second byte of the texel -> U (Cb)
            distance2 = inframe->linesize[1] * uvH + uvW++;
            pU[distance2] = data1;
            data1 = sptr[distance];                 // first byte of the texel -> V (Cr)
            pV[distance2] = data1;
        }
    }
    if (h % 2)
        uvH++;
}
EDIT1: Adding the blend state description:
D3D11_BLEND_DESC BlendStateDesc;
BlendStateDesc.AlphaToCoverageEnable = FALSE;
BlendStateDesc.IndependentBlendEnable = FALSE;
BlendStateDesc.RenderTarget[0].BlendEnable = TRUE;
BlendStateDesc.RenderTarget[0].SrcBlend = D3D11_BLEND_SRC_ALPHA;
BlendStateDesc.RenderTarget[0].DestBlend = D3D11_BLEND_INV_SRC_ALPHA;
BlendStateDesc.RenderTarget[0].BlendOp = D3D11_BLEND_OP_ADD;
BlendStateDesc.RenderTarget[0].SrcBlendAlpha = D3D11_BLEND_ONE;
BlendStateDesc.RenderTarget[0].DestBlendAlpha = D3D11_BLEND_ZERO;
BlendStateDesc.RenderTarget[0].BlendOpAlpha = D3D11_BLEND_OP_ADD;
BlendStateDesc.RenderTarget[0].RenderTargetWriteMask = D3D11_COLOR_WRITE_ENABLE_ALL;
hr = m_Device->CreateBlendState(&BlendStateDesc, &m_BlendState);
FLOAT blendFactor[4] = {0.f, 0.f, 0.f, 0.f};
m_DeviceContext->OMSetBlendState(nullptr, blendFactor, 0xffffffff); // nullptr binds the default blend state (blending disabled), not m_BlendState
m_DeviceContext->OMSetRenderTargets(1, &m_RTV, nullptr);
m_DeviceContext->VSSetShader(m_VertexShader, nullptr, 0);
m_DeviceContext->PSSetShader(m_PixelShader, nullptr, 0);
m_DeviceContext->PSSetShaderResources(0, 1, &ShaderResource);
m_DeviceContext->PSSetSamplers(0, 1, &m_SamplerLinear);
m_DeviceContext->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
EDIT2: The Y U V values calculated on the CPU are 45 200 170, while the values produced by the pixel shader (which uses floating-point calculations) are 86 141 104. The corresponding R G B values are 48 45 45. What could be making the difference?
It looks like your matrix is transposed.
According to www.martinreddy.net/gfx/faqs/colorconv.faq, under [6.4] ITU.BT-601 Y'CbCr:
Y'= 0.299*R' + 0.587*G' + 0.114*B'
Cb=-0.169*R' - 0.331*G' + 0.500*B'
Cr= 0.500*R' - 0.419*G' - 0.081*B'
You misinterpreted the behavior of numpy.dot in the source you copied.
Also, it looks like @harold is correct: you should be offsetting both U and V.
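A minimal corrected version of the conversion function along those lines (a sketch only: full-range BT.601 with the chroma offsets folded in, assuming an 8-bit UNORM render target) would be:
float3 rgb_to_yuv(float3 RGB)
{
    // Each row holds the weights for one output component (BT.601):
    float y = dot(RGB, float3( 0.299f,  0.587f,  0.114f));
    float u = dot(RGB, float3(-0.169f, -0.331f,  0.500f)) + 0.5f; // Cb offset (128/255)
    float v = dot(RGB, float3( 0.500f, -0.419f, -0.081f)) + 0.5f; // Cr offset (128/255)
    return float3(y, u, v);
}
With the offsets applied here, the separate ctr addition in PS() is no longer needed.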