I'm writing a video codec that applies JPEG compression to each frame. So far I have implemented YUV conversion, the DCT and quantized DCT (both encoding and decoding). I have also implemented YUV422 encoding, but I don't understand how to do the inverse (decoding).
To compute YUV for each pixel I used the following equations:
Encoding:
Y = 0.299 * R + 0.587 * G + 0.114 * B
U = -0.1687 * R - 0.3313 * G + 0.5 * B + 128
V = 0.5 * R - 0.4187 * G - 0.0813 * B + 128
Decoding:
R = Y + 1.402 * (V - 128)
G = Y - 0.34414 * (U - 128) - 0.71414 * (V - 128)
B = Y + 1.772 * (U - 128)
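As a self-contained illustration of those equations, one pixel can be converted each way like this (the `clamp255` helper and the function names are my own additions, not part of the asker's code):

```c
/* Clamp a floating-point result to the valid 8-bit range [0, 255],
   rounding to the nearest integer. */
static unsigned char clamp255(double v)
{
    if (v < 0.0)   return 0;
    if (v > 255.0) return 255;
    return (unsigned char)(v + 0.5);
}

/* RGB -> YUV using the JPEG/JFIF coefficients above */
static void rgb_to_yuv(unsigned char r, unsigned char g, unsigned char b,
                       unsigned char *y, unsigned char *u, unsigned char *v)
{
    *y = clamp255( 0.299  * r + 0.587  * g + 0.114  * b);
    *u = clamp255(-0.1687 * r - 0.3313 * g + 0.5    * b + 128.0);
    *v = clamp255( 0.5    * r - 0.4187 * g - 0.0813 * b + 128.0);
}

/* YUV -> RGB, the inverse transform */
static void yuv_to_rgb(unsigned char y, unsigned char u, unsigned char v,
                       unsigned char *r, unsigned char *g, unsigned char *b)
{
    *r = clamp255(y + 1.402   * (v - 128));
    *g = clamp255(y - 0.34414 * (u - 128) - 0.71414 * (v - 128));
    *b = clamp255(y + 1.772   * (u - 128));
}
```

The round trip is not exact because each direction rounds to 8 bits, but it stays within a small error of the original RGB values.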
These equations do the job perfectly.
Now, for the sub-sampling step, I take the YUV-encoded image, sum each pair of 2 horizontally adjacent pixels, and divide the result by 2. Both pixels of the pair are then set to that average.
Example:
For the sake of simplicity, I'll use single pixel values between 0 and 255 (not full RGB triples).
Just below: two examples that produce the same result.
Pixel_1 = 15, Pixel_2 = 5 -> (Pixel_1 + Pixel_2) / 2 = 10
Pixel_3 = 10, Pixel_4 = 10 -> (Pixel_3 + Pixel_4) / 2 = 10
If I apply this equation to all the pixels of my YUV image, I get a new image, this time with YUV422 sub-sampling.
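The averaging step described above can be sketched for one chroma row like this (a sketch; unlike the question, which writes the average back to both pixels, this version stores one value per pair, i.e. a true half-width 4:2:2 plane):

```c
/* 4:2:2 horizontal sub-sampling of one chroma (U or V) row:
   each pair of adjacent samples is replaced by its rounded average.
   width is assumed even; dst must hold width/2 samples. */
static void subsample_row_422(const unsigned char *src, unsigned char *dst,
                              int width)
{
    int x;
    for (x = 0; x < width; x += 2)
        dst[x >> 1] = (unsigned char)((src[x] + src[x + 1] + 1) >> 1);
}
```

With the question's example row {15, 5, 10, 10}, both pairs average to 10, showing why the original values cannot be recovered exactly.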
So I wonder: how can I get back the YUV image from the YUV422 image? My example above shows that it's impossible to recover the original YUV image exactly, because many different combinations lead to the same result (10 here). However, I think there is a way to recover, give or take a little, the original YUV pixel values. Can anyone help me, please? I'm really lost. Thanks a lot in advance for your help.
This is how the pixels are laid out for 4:2:0 and 4:2:2 (normally).
Below is the correct way to interpolate chroma between 4:2:2 and 4:2:0 (luma is already at the correct resolution).
The code can be downloaded from http://www.mpeg.org/MPEG/video/mssg-free-mpeg-software.html. The code below is from the file readpic.c:
/* vertical filter and 2:1 subsampling */
/* Note: `width`, `height` and `prog_frame` are globals, and `clp` is a
   clamping lookup table; all are defined elsewhere in the MPEG software. */
static void conv422to420(unsigned char *src, unsigned char *dst)
{
  int w, i, j, jm5, jm4, jm3, jm2, jm1;
  int jp1, jp2, jp3, jp4, jp5, jp6;

  w = width>>1; /* chroma plane is half the luma width */
  if (prog_frame)
  {
    /* intra frame */
    for (i=0; i<w; i++)
    {
      for (j=0; j<height; j+=2)
      {
        /* clamp the filter tap indices at the top and bottom borders */
        jm5 = (j<5) ? 0 : j-5;
        jm4 = (j<4) ? 0 : j-4;
        jm3 = (j<3) ? 0 : j-3;
        jm2 = (j<2) ? 0 : j-2;
        jm1 = (j<1) ? 0 : j-1;
        jp1 = (j<height-1) ? j+1 : height-1;
        jp2 = (j<height-2) ? j+2 : height-1;
        jp3 = (j<height-3) ? j+3 : height-1;
        jp4 = (j<height-4) ? j+4 : height-1;
        jp5 = (j<height-5) ? j+5 : height-1;
        jp6 = (j<height-6) ? j+6 : height-1;

        /* FIR filter with 0.5 sample interval phase shift */
        dst[w*(j>>1)] = clp[(int)(228*(src[w*j]+src[w*jp1])
                                  +70*(src[w*jm1]+src[w*jp2])
                                  -37*(src[w*jm2]+src[w*jp3])
                                  -21*(src[w*jm3]+src[w*jp4])
                                  +11*(src[w*jm4]+src[w*jp5])
                                  + 5*(src[w*jm5]+src[w*jp6])+256)>>9];
      }
      src++;
      dst++;
    }
  }
}
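The filter above runs vertically, for 4:2:2 to 4:2:0. For the asker's horizontal 4:2:2 case the same idea applies in one dimension: to go back up, duplicate each chroma sample and, for a slightly better result, interpolate between neighbours. This helper, `upsample_row_422`, is my own sketch and not part of the MPEG source:

```c
/* 4:2:2 -> 4:4:4 for one chroma (U or V) row: each stored sample
   covered a pair of pixels. The first pixel of the pair takes the
   sample directly; the second takes the rounded midpoint between
   this sample and the next one (clamped at the row's right edge). */
static void upsample_row_422(const unsigned char *src, unsigned char *dst,
                             int half_width)
{
    int x;
    for (x = 0; x < half_width; x++) {
        int left  = src[x];
        int right = (x + 1 < half_width) ? src[x + 1] : src[x];
        dst[2 * x]     = (unsigned char)left;
        dst[2 * x + 1] = (unsigned char)((left + right + 1) >> 1);
    }
}
```

This will not reproduce the original pixels exactly (that information is gone, as the question's example shows), but it gives a smooth approximation instead of blocky duplicated values.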
Hope it helps.