I’m trying to convert YUV_420_888 images coming from the camera2 preview into bitmaps, but the output image has incorrect colors.
Below is the test code I’m running to generate the bitmap. It is test code only, so please don’t review factors that aren’t relevant here, such as the bitmap being recycled or the RenderScript being created over and over. This code exists purely to test the YUV-to-RGB conversion and nothing more.
Also note that the code is meant to run on API 22 and above, so using the RenderScript-specific ScriptIntrinsicYuvToRGB should be sufficient, without resorting to the old manual conversions that were only necessary on earlier Android versions due to the lack of proper YUV_420_888 support.
Since RenderScript already offers a dedicated ScriptIntrinsicYuvToRGB, which is meant to handle all types of YUV conversions, I think the problem is in how I extract the YUV byte data from the Image object, but I can’t figure out where the issue is.
To view the output bitmap in Android Studio, place a breakpoint at bitmap.recycle(); before the bitmap gets recycled you can inspect it in the Variables debug window using the “View Bitmap” option.
Please let me know if anyone can spot what’s wrong with the conversion:
@Override
public void onImageAvailable(ImageReader reader)
{
    RenderScript rs = RenderScript.create(this.mContext);

    final Image image = reader.acquireLatestImage();
    final Image.Plane[] planes = image.getPlanes();
    final ByteBuffer planeY = planes[0].getBuffer();
    final ByteBuffer planeU = planes[1].getBuffer();
    final ByteBuffer planeV = planes[2].getBuffer();

    // Get the YUV planes data
    final int Yb = planeY.rewind().remaining();
    final int Ub = planeU.rewind().remaining();
    final int Vb = planeV.rewind().remaining();

    final ByteBuffer yuvData = ByteBuffer.allocateDirect(Yb + Ub + Vb);
    planeY.get(yuvData.array(), 0, Yb);
    planeU.get(yuvData.array(), Yb, Vb);
    planeV.get(yuvData.array(), Yb + Vb, Ub);

    // Initialize Renderscript
    Type.Builder yuvType = new Type.Builder(rs, Element.YUV(rs))
            .setX(image.getWidth())
            .setY(image.getHeight())
            .setYuvFormat(ImageFormat.YUV_420_888);

    final Type.Builder rgbaType = new Type.Builder(rs, Element.RGBA_8888(rs))
            .setX(image.getWidth())
            .setY(image.getHeight());

    Allocation yuvAllocation = Allocation.createTyped(rs, yuvType.create(), Allocation.USAGE_SCRIPT);
    Allocation rgbAllocation = Allocation.createTyped(rs, rgbaType.create(), Allocation.USAGE_SCRIPT);

    // Convert
    yuvAllocation.copyFromUnchecked(yuvData.array());
    ScriptIntrinsicYuvToRGB scriptYuvToRgb = ScriptIntrinsicYuvToRGB.create(rs, Element.YUV(rs));
    scriptYuvToRgb.setInput(yuvAllocation);
    scriptYuvToRgb.forEach(rgbAllocation);

    // Get the bitmap
    Bitmap bitmap = Bitmap.createBitmap(image.getWidth(), image.getHeight(), Bitmap.Config.ARGB_8888);
    rgbAllocation.copyTo(bitmap);

    // Release
    bitmap.recycle();
    yuvAllocation.destroy();
    rgbAllocation.destroy();
    rs.destroy();
    image.close();
}
Answering my own question: the actual problem was, as I suspected, in how I was transforming the Image planes into the ByteBuffer. Below is the solution, which should work for both NV21 and YV12. Since the YUV data already comes in separate planes, it’s just a matter of reading it the correct way based on each plane’s row and pixel strides. I also had to make some minor changes in how the data is passed to the RenderScript intrinsic.
NOTE: For an uninterrupted, production-optimized onImageAvailable() flow, the Image byte data should instead be copied into a separate buffer first and the conversion executed on a separate thread (depending on your requirements). Since that isn’t part of the question, the code below performs the conversion directly inside onImageAvailable() to keep the answer simple; a minimal sketch of the copy-and-convert-off-thread approach follows the code.
@Override
public void onImageAvailable(ImageReader reader)
{
    // Get the YUV data
    final Image image = reader.acquireLatestImage();
    final ByteBuffer yuvBytes = this.imageToByteBuffer(image);

    // Convert YUV to RGB
    final RenderScript rs = RenderScript.create(this.mContext);

    final Bitmap bitmap = Bitmap.createBitmap(image.getWidth(), image.getHeight(), Bitmap.Config.ARGB_8888);
    final Allocation allocationRgb = Allocation.createFromBitmap(rs, bitmap);

    final Allocation allocationYuv = Allocation.createSized(rs, Element.U8(rs), yuvBytes.array().length);
    allocationYuv.copyFrom(yuvBytes.array());

    ScriptIntrinsicYuvToRGB scriptYuvToRgb = ScriptIntrinsicYuvToRGB.create(rs, Element.U8_4(rs));
    scriptYuvToRgb.setInput(allocationYuv);
    scriptYuvToRgb.forEach(allocationRgb);

    allocationRgb.copyTo(bitmap);

    // Release
    bitmap.recycle();
    allocationYuv.destroy();
    allocationRgb.destroy();
    rs.destroy();
    image.close();
}
private ByteBuffer imageToByteBuffer(final Image image)
{
    final Rect crop   = image.getCropRect();
    final int  width  = crop.width();
    final int  height = crop.height();

    final Image.Plane[] planes     = image.getPlanes();
    final byte[]        rowData    = new byte[planes[0].getRowStride()];
    final int           bufferSize = width * height * ImageFormat.getBitsPerPixel(ImageFormat.YUV_420_888) / 8;
    final ByteBuffer    output     = ByteBuffer.allocateDirect(bufferSize);

    int channelOffset = 0;
    int outputStride  = 0;

    for (int planeIndex = 0; planeIndex < 3; planeIndex++)
    {
        if (planeIndex == 0)
        {
            // Y plane: copied as-is to the start of the output
            channelOffset = 0;
            outputStride  = 1;
        }
        else if (planeIndex == 1)
        {
            // U bytes go to the odd positions after the Y plane (interleaved VU, NV21 layout)
            channelOffset = width * height + 1;
            outputStride  = 2;
        }
        else if (planeIndex == 2)
        {
            // V bytes go to the even positions after the Y plane
            channelOffset = width * height;
            outputStride  = 2;
        }

        final ByteBuffer buffer      = planes[planeIndex].getBuffer();
        final int        rowStride   = planes[planeIndex].getRowStride();
        final int        pixelStride = planes[planeIndex].getPixelStride();

        // Chroma planes are subsampled to half the width and height of the luma plane
        final int shift         = (planeIndex == 0) ? 0 : 1;
        final int widthShifted  = width  >> shift;
        final int heightShifted = height >> shift;

        buffer.position(rowStride * (crop.top >> shift) + pixelStride * (crop.left >> shift));

        for (int row = 0; row < heightShifted; row++)
        {
            final int length;

            if (pixelStride == 1 && outputStride == 1)
            {
                // The row is already tightly packed: copy it in one go
                length = widthShifted;
                buffer.get(output.array(), channelOffset, length);
                channelOffset += length;
            }
            else
            {
                // Copy the row into a scratch buffer and pick every pixelStride-th byte
                length = (widthShifted - 1) * pixelStride + 1;
                buffer.get(rowData, 0, length);

                for (int col = 0; col < widthShifted; col++)
                {
                    output.array()[channelOffset] = rowData[col * pixelStride];
                    channelOffset += outputStride;
                }
            }

            // Skip the row padding, except after the last row (it may not be present)
            if (row < heightShifted - 1)
            {
                buffer.position(buffer.position() + rowStride - length);
            }
        }
    }

    return output;
}
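As mentioned in the note above, for a production pipeline you would copy the Image data out, close the Image right away, and run the conversion on a worker thread instead of inside the callback. Below is only a minimal sketch of that idea, assuming a single-threaded java.util.concurrent executor and a hypothetical convertToBitmap() helper that wraps the RenderScript conversion shown above:
private final ExecutorService mConversionExecutor = Executors.newSingleThreadExecutor();

@Override
public void onImageAvailable(ImageReader reader)
{
    final Image image = reader.acquireLatestImage();
    if (image == null)
    {
        return;
    }

    // Copy the plane data into our own buffer so the Image can be closed
    // immediately and the camera can keep reusing the ImageReader buffers.
    final int width  = image.getWidth();
    final int height = image.getHeight();
    final ByteBuffer yuvBytes = this.imageToByteBuffer(image);
    image.close();

    // Run the comparatively slow YUV -> RGB conversion off the camera callback thread.
    mConversionExecutor.execute(new Runnable()
    {
        @Override
        public void run()
        {
            // convertToBitmap() is a placeholder for the RenderScript code above.
            final Bitmap bitmap = convertToBitmap(yuvBytes, width, height);
            // ... hand the bitmap to whoever needs it ...
        }
    });
}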
RenderScript supports YUV_420_888 directly as ScriptIntrinsicYuvToRGB's source, so the planes don't need to be repacked by hand.
Create the Allocations and the ScriptIntrinsicYuvToRGB:
RenderScript renderScript = RenderScript.create(this);

ScriptIntrinsicYuvToRGB mScriptIntrinsicYuvToRGB = ScriptIntrinsicYuvToRGB.create(renderScript, Element.YUV(renderScript));

Allocation mAllocationInYUV = Allocation.createTyped(renderScript,
        new Type.Builder(renderScript, Element.YUV(renderScript))
                .setYuvFormat(ImageFormat.YUV_420_888)
                .setX(480)
                .setY(640)
                .create(),
        Allocation.USAGE_IO_INPUT | Allocation.USAGE_SCRIPT);

Allocation mAllocationOutRGB = Allocation.createTyped(renderScript,
        Type.createXY(renderScript, Element.RGBA_8888(renderScript), 480, 640),
        Allocation.USAGE_SCRIPT | Allocation.USAGE_IO_OUTPUT);
Set the input Allocation's getSurface() as a camera target to receive the image data:
final CaptureRequest.Builder captureRequest = session.getDevice().createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
captureRequest.addTarget(mAllocationInYUV.getSurface());
Output to a TextureView (or an ImageReader or SurfaceView):
mAllocationOutRGB.setSurface(new Surface(mTextureView.getSurfaceTexture()));
mAllocationInYUV.setOnBufferAvailableListener(new Allocation.OnBufferAvailableListener() {
    @Override
    public void onBufferAvailable(Allocation a) {
        a.ioReceive();
        mScriptIntrinsicYuvToRGB.setInput(a);
        mScriptIntrinsicYuvToRGB.forEach(mAllocationOutRGB);
        mAllocationOutRGB.ioSend();
    }
});
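For completeness, the request built above still has to be submitted to the capture session; assuming the usual camera background handler (mBackgroundHandler is just a placeholder name here), that looks roughly like:
session.setRepeatingRequest(captureRequest.build(), null, mBackgroundHandler);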