Converting YUV->RGB(Image processing)->YUV during onPreviewFrame in android?

I am capturing images using a SurfaceView and getting the raw YUV preview data in public void onPreviewFrame(byte[] data, Camera camera).

I have to perform some image preprocessing in onPreviewFrame, so I need to convert the YUV preview data to RGB, do the preprocessing, and then convert it back to YUV.

I have used the following functions for decoding the YUV data to RGB and encoding it back:

public void onPreviewFrame(byte[] data, Camera camera) {
    Point cameraResolution = configManager.getCameraResolution();
    if (data != null) {
        Log.i("DEBUG", "data Not Null");

        // Preprocessing
        Log.i("DEBUG", "Try For Image Processing");
        Camera.Parameters mParameters = camera.getParameters();
        Size mSize = mParameters.getPreviewSize();
        int mWidth = mSize.width;
        int mHeight = mSize.height;
        int[] mIntArray = new int[mWidth * mHeight];

        // Decode Yuv data to integer array
        decodeYUV420SP(mIntArray, data, mWidth, mHeight);

        // Converting int mIntArray to Bitmap and
        // then image preprocessing
        // and back to mIntArray.

        // Encode intArray to Yuv data
        encodeYUV420SP(data, mIntArray, mWidth, mHeight);
    }
}

static public void decodeYUV420SP(int[] rgba, byte[] yuv420sp, int width,
        int height) {
    final int frameSize = width * height;

    for (int j = 0, yp = 0; j < height; j++) {
        int uvp = frameSize + (j >> 1) * width, u = 0, v = 0;
        for (int i = 0; i < width; i++, yp++) {
            int y = (0xff & ((int) yuv420sp[yp])) - 16;
            if (y < 0)
                y = 0;
            if ((i & 1) == 0) {
                v = (0xff & yuv420sp[uvp++]) - 128;
                u = (0xff & yuv420sp[uvp++]) - 128;
            }

            int y1192 = 1192 * y;
            int r = (y1192 + 1634 * v);
            int g = (y1192 - 833 * v - 400 * u);
            int b = (y1192 + 2066 * u);

            if (r < 0)
                r = 0;
            else if (r > 262143)
                r = 262143;
            if (g < 0)
                g = 0;
            else if (g > 262143)
                g = 262143;
            if (b < 0)
                b = 0;
            else if (b > 262143)
                b = 262143;

            // rgb[yp] = 0xff000000 | ((r << 6) & 0xff0000) | ((g >> 2) &
            // 0xff00) | ((b >> 10) & 0xff);
            // rgba, divide 2^10 ( >> 10)
            rgba[yp] = ((r << 14) & 0xff000000) | ((g << 6) & 0xff0000)
                    | ((b >> 2) | 0xff00);
        }
    }
}

static public void encodeYUV420SP_original(byte[] yuv420sp, int[] rgba,
        int width, int height) {
    final int frameSize = width * height;

    int[] U, V;
    U = new int[frameSize];
    V = new int[frameSize];

    final int uvwidth = width / 2;

    int r, g, b, y, u, v;
    for (int j = 0; j < height; j++) {
        int index = width * j;
        for (int i = 0; i < width; i++) {
            r = (rgba[index] & 0xff000000) >> 24;
            g = (rgba[index] & 0xff0000) >> 16;
            b = (rgba[index] & 0xff00) >> 8;

            // rgb to yuv
            y = (66 * r + 129 * g + 25 * b + 128) >> 8 + 16;
            u = (-38 * r - 74 * g + 112 * b + 128) >> 8 + 128;
            v = (112 * r - 94 * g - 18 * b + 128) >> 8 + 128;

            // clip y
            yuv420sp[index++] = (byte) ((y < 0) ? 0 : ((y > 255) ? 255 : y));
            U[index] = u;
            V[index++] = v;
        }
    }
}
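The commented-out Bitmap round trip is roughly as follows (only a sketch of that step, reusing the mWidth, mHeight and mIntArray variables above; the actual preprocessing is omitted):

// Create a mutable Bitmap and fill it with the decoded ARGB pixels
Bitmap bmp = Bitmap.createBitmap(mWidth, mHeight, Bitmap.Config.ARGB_8888);
bmp.setPixels(mIntArray, 0, mWidth, 0, 0, mWidth, mHeight);

// ... image preprocessing on bmp would happen here ...

// Copy the (possibly modified) pixels back into the int array
// so they can be encoded back to YUV
bmp.getPixels(mIntArray, 0, mWidth, 0, 0, mWidth, mHeight);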

The problem is that the encoding and decoding of the YUV data must have some mistake, because even if I skip the preprocessing step, the re-encoded YUV data differs from the original data of the PreviewCallback.

Please help me resolve this issue. I need to use this code for OCR scanning, so I need to implement this kind of logic.

If there is any other way of doing the same thing, please let me know.

Thanks in advance. :)

asked Feb 17 '12 by Hitesh Patel


2 Answers

Although the documentation suggests that you can set the format in which the image data should arrive from the camera, in practice you often have a choice of one: NV21, a YUV format. For lots of information on this format see http://www.fourcc.org/yuv.php#NV21 and for information on the theory behind converting it to RGB see http://www.fourcc.org/fccyvrgb.php. There is a picture-based explanation at Extract black and white image from android camera's NV21 format. There is an Android-specific section on the Wikipedia page about the subject (thanks @AlexCohn): YUV#Y'UV420sp (NV21) to RGB conversion (Android).

However, once you've set up your onPreviewFrame routine, the mechanics of going from the byte array it sends you to useful data are somewhat, ummmm, unclear. From API 8 onwards, the following solution is available to get to a ByteArrayOutputStream holding a JPEG of the image (compressToJpeg is the only conversion option offered by YuvImage):

// pWidth and pHeight define the size of the preview frame
ByteArrayOutputStream out = new ByteArrayOutputStream();

// Alter the second parameter of this to the actual format you are receiving
YuvImage yuv = new YuvImage(data, ImageFormat.NV21, pWidth, pHeight, null);

// bWidth and bHeight define the size of the bitmap you wish to fill with the preview image
yuv.compressToJpeg(new Rect(0, 0, bWidth, bHeight), 50, out);

This JPEG may then need to be converted into the format you want. If you want a Bitmap:

byte[] bytes = out.toByteArray();
Bitmap bitmap = BitmapFactory.decodeByteArray(bytes, 0, bytes.length);

If, for whatever reason, you are unable to do this, you can do the conversion manually. Some problems to be overcome in doing this:

  1. The data arrives in a byte array. By definition, bytes are signed numbers, meaning that they go from -128 to 127. However, the data is actually unsigned bytes (0 to 255). If this isn't dealt with, the outcome is doomed to have some odd clipping effects (see the short sketch after this list).

  2. The data is in a very specific order (as per the previously mentioned web pages) and each pixel needs to be extracted carefully.

  3. Each pixel needs to be put into the right place on a bitmap, say. This also requires a rather messy (in my view) approach of building a buffer of the data and then filling a bitmap from it.

  4. In principle, the values should be stored [16..240], but it appears that they are stored [0..255] in the data sent to onPreviewFrame.

  5. Just about every web page on the matter proposes different coefficients, even allowing for [16..240] vs [0..255] options.

  6. If you've actually got NV12 (another variant on YUV420), then you will need to swap the reads for U and V.
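As a quick illustration of point 1, masking with 0xff is what recovers the unsigned value that the camera actually wrote (a minimal sketch, not code from the original post):

// A byte holding the "unsigned" value 200 reads back negative in Java
byte raw = (byte) 200;
int asSigned = raw;          // -56: sign-extended, which causes the odd clipping effects
int asUnsigned = raw & 0xff; // 200: the value the camera actually wrote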

I present a solution (which seems to work), with requests for corrections, improvements and ways of making the whole thing less costly to run. I have set it out to hopefully make clear what is happening, rather than to optimise it for speed. It creates a bitmap the size of the preview image:

The data variable comes from the call to onPreviewFrame:

// Define whether expecting [16..240] or [0..255]
boolean dataIs16To240 = false;

// the bitmap we want to fill with the image
Bitmap bitmap = Bitmap.createBitmap(imageWidth, imageHeight, Bitmap.Config.ARGB_8888);
int numPixels = imageWidth * imageHeight;

// the buffer we fill up which we then fill the bitmap with
IntBuffer intBuffer = IntBuffer.allocate(imageWidth * imageHeight);
// If you're reusing a buffer, the next line is imperative to refill from the start;
// if not, it is good practice
intBuffer.position(0);

// Set the alpha for the image: 0 is transparent, 255 fully opaque
final byte alpha = (byte) 255;

// Holding variables for the loop calculation
int R = 0;
int G = 0;
int B = 0;

// Get each pixel, one at a time
for (int y = 0; y < imageHeight; y++) {
    for (int x = 0; x < imageWidth; x++) {
        // Get the Y value, stored in the first block of data
        // The logical "AND 0xff" is needed to deal with the signed issue
        float Y = (float) (data[y * imageWidth + x] & 0xff);

        // Get U and V values, stored after Y values, one per 2x2 block
        // of pixels, interleaved. Prepare them as floats with correct range
        // ready for calculation later.
        int xby2 = x / 2;
        int yby2 = y / 2;

        // make this V for NV12/420SP
        float U = (float) (data[numPixels + 2 * xby2 + yby2 * imageWidth] & 0xff) - 128.0f;

        // make this U for NV12/420SP
        float V = (float) (data[numPixels + 2 * xby2 + 1 + yby2 * imageWidth] & 0xff) - 128.0f;

        if (dataIs16To240) {
            // Correct Y to allow for the fact that it is [16..235] and not [0..255]
            Y = 1.164f * (Y - 16.0f);

            // Do the YUV -> RGB conversion
            // These seem to work, but other variations are quoted
            // out there.
            R = (int) (Y + 1.596f * V);
            G = (int) (Y - 0.813f * V - 0.391f * U);
            B = (int) (Y + 2.018f * U);
        } else {
            // No need to correct Y
            // These are the coefficients proposed by @AlexCohn
            // for [0..255], as per the wikipedia page referenced
            // above
            R = (int) (Y + 1.370705f * V);
            G = (int) (Y - 0.698001f * V - 0.337633f * U);
            B = (int) (Y + 1.732446f * U);
        }

        // Clip rgb values to 0-255
        R = R < 0 ? 0 : R > 255 ? 255 : R;
        G = G < 0 ? 0 : G > 255 ? 255 : G;
        B = B < 0 ? 0 : B > 255 ? 255 : B;

        // Put that pixel in the buffer
        intBuffer.put(alpha * 16777216 + R * 65536 + G * 256 + B);
    }
}

// Get buffer ready to be read
intBuffer.flip();

// Push the pixel information from the buffer onto the bitmap.
bitmap.copyPixelsFromBuffer(intBuffer);

As @Timmmm points out below, you could do the conversion in int by multiplying the scaling factors by 1000 (i.e. 1.164 becomes 1164) and then dividing the end results by 1000.
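A minimal sketch of that integer-only idea for a single pixel, using the [0..255] coefficients above scaled by 1000 and rounded (yuvToArgb is a hypothetical helper, not code from the original thread; y, u and v are the masked, unsigned sample values):

// Integer-only YUV -> RGB for one pixel; the coefficients are the [0..255]
// ones above, scaled by 1000 and rounded, so no floating point is needed.
static int yuvToArgb(int y, int u, int v) {
    int uc = u - 128;
    int vc = v - 128;

    int r = (1000 * y + 1371 * vc) / 1000;
    int g = (1000 * y - 698 * vc - 338 * uc) / 1000;
    int b = (1000 * y + 1732 * uc) / 1000;

    // Clip to 0..255
    r = r < 0 ? 0 : r > 255 ? 255 : r;
    g = g < 0 ? 0 : g > 255 ? 255 : g;
    b = b < 0 ? 0 : b > 255 ? 255 : b;

    // Pack as ARGB with full alpha
    return 0xff000000 | (r << 16) | (g << 8) | b;
}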

answered Oct 13 '22 by Neil Townsend


Why not specify that camera preview should provide RGB images?

i.e. Camera.Parameters.setPreviewFormat(ImageFormat.RGB_565);
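RGB_565 preview is not guaranteed to be available on every device, so it is worth checking the supported formats before relying on it; a minimal sketch (assuming an already-opened Camera instance called camera):

Camera.Parameters params = camera.getParameters();

// RGB_565 may not be supported for preview on all devices, so check first
List<Integer> supported = params.getSupportedPreviewFormats();
if (supported != null && supported.contains(ImageFormat.RGB_565)) {
    params.setPreviewFormat(ImageFormat.RGB_565);
    camera.setParameters(params);
}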

answered Oct 13 '22 by Reuben Scratton