In my application, we need to display video frames received from a server in our Android application.
The server sends video data at 50 frames per second, encoded in WebM, i.e. using libvpx to encode and decode the images.
After decoding with libvpx we get YUV data, which we can display over the image layout.
The current implementation is something like this.
In JNI / native C++ code, we convert the YUV data to RGB data, and in the Android framework we call
public Bitmap createImgae(byte[] bits, int width, int height, int scan) {
    Bitmap bitmap = null;
    System.out.println("video: creating bitmap");
    //try {
    bitmap = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
    bitmap.copyPixelsFromBuffer(ByteBuffer.wrap(bits));
    //} catch (OutOfMemoryError ex) {
    //}
    System.out.println("video: bitmap created");
    return bitmap;
}
to create the bitmap image, and then display it over the ImageView using the following code:
img = createImgae(imgRaw, imgInfo[0], imgInfo[1], 1);
if (img != null && !img.isRecycled()) {
    iv.setImageBitmap(img);
    //img.recycle();
    img = null;
    System.out.println("video: image displayed");
}
My query is: overall this function takes approx. 40 ms. Is there any way to optimize it?
1 -- Is there any way to display YUV data in an ImageView directly?
2 -- Is there any other way to create an image (a Bitmap) from RGB data?
3 -- I believe I am always creating a new image, but I suppose I should create the bitmap only once and just supply a new buffer as and when frames are received (see the sketch below).
Please share your views.
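To illustrate point 3, here is a rough sketch of what I have in mind (initOnce and onFrame are hypothetical names, and it assumes a fixed frame size and an RGBA input buffer):

private Bitmap frameBitmap; // allocated once, reused for every frame

void initOnce(int width, int height) {
    frameBitmap = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
}

void onFrame(byte[] rgba) {
    // refill the existing bitmap instead of allocating a new one per frame
    frameBitmap.copyPixelsFromBuffer(ByteBuffer.wrap(rgba));
    iv.setImageBitmap(frameBitmap);
    iv.invalidate(); // make sure the view redraws, since the Bitmap object itself is unchanged
}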
The following code may solve your problem, and it may take less time on YUV-format data because the YuvImage class is provided with the Android SDK.
You can try this:
ByteArrayOutputStream out = new ByteArrayOutputStream();
YuvImage yuvImage = new YuvImage(data, ImageFormat.NV21, width, height, null);
yuvImage.compressToJpeg(new Rect(0, 0, width, height), 50, out);
byte[] imageBytes = out.toByteArray();
Bitmap image = BitmapFactory.decodeByteArray(imageBytes, 0, imageBytes.length);
iv.setImageBitmap(image);
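One caveat: this route performs a full JPEG compression and decompression on every frame, so at 50 frames per second it may well be slower than a direct YUV-to-RGB conversion. It is mainly convenient when you also want the JPEG bytes anyway (e.g. for snapshots); the quality parameter (50 here) trades size for fidelity.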
or
void yourFunction(byte[] data, int mWidth, int mHeight) {
    int[] mIntArray = new int[mWidth * mHeight];

    // Decode YUV data to an integer array
    decodeYUV420SP(mIntArray, data, mWidth, mHeight);

    // Initialize the bitmap with the decoded pixels
    Bitmap bmp = Bitmap.createBitmap(mIntArray, mWidth, mHeight, Bitmap.Config.ARGB_8888);

    // Draw the bitmap
    iv.setImageBitmap(bmp);
}

static public void decodeYUV420SP(int[] rgba, byte[] yuv420sp, int width, int height) {
    final int frameSize = width * height;

    for (int j = 0, yp = 0; j < height; j++) {
        int uvp = frameSize + (j >> 1) * width, u = 0, v = 0;
        for (int i = 0; i < width; i++, yp++) {
            int y = (0xff & ((int) yuv420sp[yp])) - 16;
            if (y < 0)
                y = 0;
            if ((i & 1) == 0) {
                v = (0xff & yuv420sp[uvp++]) - 128;
                u = (0xff & yuv420sp[uvp++]) - 128;
            }

            int y1192 = 1192 * y;
            int r = (y1192 + 1634 * v);
            int g = (y1192 - 833 * v - 400 * u);
            int b = (y1192 + 2066 * u);

            if (r < 0) r = 0; else if (r > 262143) r = 262143;
            if (g < 0) g = 0; else if (g > 262143) g = 262143;
            if (b < 0) b = 0; else if (b > 262143) b = 262143;

            // Pack into ARGB_8888: each channel is in 0..262143 (18 bits),
            // so shift each one down to 8 bits and into its byte position
            rgba[yp] = 0xff000000 | ((r << 6) & 0xff0000) | ((g >> 2) & 0xff00) | ((b >> 10) & 0xff);
        }
    }
}
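If the Java loop above is still too slow at 50 fps, one alternative worth trying (an assumption about your setup, not part of the original answer) is the RenderScript intrinsic ScriptIntrinsicYuvToRGB, which offloads the NV21-to-RGBA conversion. A minimal sketch, assuming fixed-size NV21 frames; RenderScript is deprecated on recent Android versions but still widely used for this, and the objects should be created once and reused:

RenderScript rs = RenderScript.create(context);
ScriptIntrinsicYuvToRGB yuvToRgb = ScriptIntrinsicYuvToRGB.create(rs, Element.U8_4(rs));

Type yuvType = new Type.Builder(rs, Element.U8(rs)).setX(data.length).create();
Allocation inAlloc = Allocation.createTyped(rs, yuvType, Allocation.USAGE_SCRIPT);

Type rgbaType = new Type.Builder(rs, Element.RGBA_8888(rs)).setX(width).setY(height).create();
Allocation outAlloc = Allocation.createTyped(rs, rgbaType, Allocation.USAGE_SCRIPT);

// Per frame:
inAlloc.copyFrom(data);       // NV21 bytes
yuvToRgb.setInput(inAlloc);
yuvToRgb.forEach(outAlloc);   // run the conversion
outAlloc.copyTo(bmp);         // bmp is a reused ARGB_8888 Bitmap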
Create a bitmap once you have the width and height, in onCreate:
editedBitmap = Bitmap.createBitmap(widthPreview, heightPreview, android.graphics.Bitmap.Config.ARGB_8888);
And in onPreviewFrame:
int[] rgbData = decodeGreyscale(aNv21Byte, widthPreview, heightPreview);
editedBitmap.setPixels(rgbData, 0, widthPreview, 0, 0, widthPreview, heightPreview);
And
private int[] decodeGreyscale(byte[] nv21, int width, int height) {
    int pixelCount = width * height;
    int[] out = new int[pixelCount];
    for (int i = 0; i < pixelCount; ++i) {
        int luminance = nv21[i] & 0xFF;
        // out[i] = Color.argb(0xFF, luminance, luminance, luminance);
        out[i] = 0xff000000 | (luminance << 16) | (luminance << 8) | luminance; // no need to call Color.argb for each pixel
    }
    return out;
}
And a bonus:
Matrix matrix = new Matrix();
if (cameraId == CameraInfo.CAMERA_FACING_FRONT) {
    matrix.setRotate(270F);
}
finalBitmap = Bitmap.createBitmap(editedBitmap, 0, 0, widthPreview, heightPreview, matrix, true);
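Note that Bitmap.createBitmap(..., matrix, true) allocates a new Bitmap on every frame, which works against creating the bitmap only once. If the rotation is only needed for display (an assumption about your use case), it may be cheaper to rotate the ImageView once instead:

if (cameraId == CameraInfo.CAMERA_FACING_FRONT) {
    iv.setRotation(270f); // rotate the view once instead of copying the bitmap every frame
}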