I wrote a conversion from YUV_420_888 to Bitmap, based on the following logic (as I understand it):
To summarize the approach: the kernel's x and y coordinates are congruent both with the x and y of the non-padded part of the Y plane (a 2-d allocation) and with the x and y of the output Bitmap. The U and V planes, however, have a different structure than the Y plane: they use one byte to cover four pixels, they may have a PixelStride greater than one, and they may also have a padding that differs from that of the Y plane. Therefore, in order to access the U and V values efficiently from the kernel, I put them into 1-d allocations and created an index "uvIndex" that gives the position of the corresponding U and V within that 1-d allocation, for given (x,y) coordinates in the (non-padded) Y plane (and hence in the output Bitmap).
In order to keep the rs kernel lean, I excluded the Y plane's padding area by capping the x range via LaunchOptions (this reflects the RowStride of the Y plane, which can thus be ignored WITHIN the kernel). So we only need to consider uvPixelStride and uvRowStride within uvIndex, i.e. the index used to access the u and v values.
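To make the mapping concrete, here is a small worked example in plain Java (the stride values 2 and 1280 are made up for illustration, not taken from any particular device):

// A minimal sketch of the uvIndex mapping, with hypothetical strides
// (uvPixelStride = 2, uvRowStride = 1280 are made-up example values).
static int uvIndex(int x, int y, int uvPixelStride, int uvRowStride) {
    // Integer division rounds down, so all four Y pixels of a 2x2 block
    // resolve to the same U/V sample.
    return uvPixelStride * (x / 2) + uvRowStride * (y / 2);
}

// uvIndex(0, 0, 2, 1280) == 0      uvIndex(1, 1, 2, 1280) == 0
// uvIndex(2, 0, 2, 1280) == 2      uvIndex(0, 2, 2, 1280) == 1280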
This is my code:
RenderScript kernel, named yuv420888.rs:
#pragma version(1)
#pragma rs java_package_name(com.xxxyyy.testcamera2);
#pragma rs_fp_relaxed

int32_t width;
int32_t height;

uint picWidth, uvPixelStride, uvRowStride;
rs_allocation ypsIn, uIn, vIn;

// The LaunchOptions ensure that the Kernel does not enter the padding zone of Y,
// so yRowStride can be ignored WITHIN the Kernel.
uchar4 __attribute__((kernel)) doConvert(uint32_t x, uint32_t y) {

    // index for accessing the uIn's and vIn's
    uint uvIndex = uvPixelStride * (x/2) + uvRowStride * (y/2);

    // get the y, u, v values
    uchar yps = rsGetElementAt_uchar(ypsIn, x, y);
    uchar u = rsGetElementAt_uchar(uIn, uvIndex);
    uchar v = rsGetElementAt_uchar(vIn, uvIndex);

    // calc argb
    int4 argb;
    argb.r = yps + v * 1436 / 1024 - 179;
    argb.g = yps - u * 46549 / 131072 + 44 - v * 93604 / 131072 + 91;
    argb.b = yps + u * 1814 / 1024 - 227;
    argb.a = 255;

    uchar4 out = convert_uchar4(clamp(argb, 0, 255));
    return out;
}
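For reference, the fixed-point constants in the kernel approximate the full-range BT.601 YUV-to-RGB conversion (e.g. 1436/1024 ≈ 1.402 and 179 ≈ 1.402 * 128). A plain-Java float version of the same formulas, as a sketch rather than part of the original code, would look like this:

// Float reference for the kernel's integer arithmetic (full-range BT.601).
static int[] yuvToRgb(int yVal, int uVal, int vVal) {
    float r = yVal + 1.402f * (vVal - 128);
    float g = yVal - 0.344f * (uVal - 128) - 0.714f * (vVal - 128);
    float b = yVal + 1.772f * (uVal - 128);
    // clamp to [0, 255], like the kernel's clamp(argb, 0, 255)
    return new int[] {
            Math.max(0, Math.min(255, Math.round(r))),
            Math.max(0, Math.min(255, Math.round(g))),
            Math.max(0, Math.min(255, Math.round(b)))
    };
}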
Java side:
private Bitmap YUV_420_888_toRGB(Image image, int width, int height) {
    // Get the three image planes
    Image.Plane[] planes = image.getPlanes();
    ByteBuffer buffer = planes[0].getBuffer();
    byte[] y = new byte[buffer.remaining()];
    buffer.get(y);

    buffer = planes[1].getBuffer();
    byte[] u = new byte[buffer.remaining()];
    buffer.get(u);

    buffer = planes[2].getBuffer();
    byte[] v = new byte[buffer.remaining()];
    buffer.get(v);

    // get the relevant RowStrides and PixelStrides
    // (we know from documentation that PixelStride is 1 for y)
    int yRowStride = planes[0].getRowStride();
    int uvRowStride = planes[1].getRowStride();     // we know from documentation that RowStride is the same for u and v.
    int uvPixelStride = planes[1].getPixelStride(); // we know from documentation that PixelStride is the same for u and v.

    // rs creation just for demo. Create rs just once in onCreate and use it again.
    RenderScript rs = RenderScript.create(this);
    //RenderScript rs = MainActivity.rs;
    ScriptC_yuv420888 mYuv420 = new ScriptC_yuv420888(rs);

    // Y, U and V are defined as global allocations; the out-Allocation is the Bitmap.
    // Note also that uAlloc and vAlloc are 1-dimensional while yAlloc is 2-dimensional.
    Type.Builder typeUcharY = new Type.Builder(rs, Element.U8(rs));
    // using safe height
    typeUcharY.setX(yRowStride).setY(y.length / yRowStride);
    Allocation yAlloc = Allocation.createTyped(rs, typeUcharY.create());
    yAlloc.copyFrom(y);
    mYuv420.set_ypsIn(yAlloc);

    Type.Builder typeUcharUV = new Type.Builder(rs, Element.U8(rs));
    // note that the size of the u's and v's are as follows:
    //      ( (width/2)*PixelStride + padding ) * (height/2)
    //    = ( RowStride                        ) * (height/2)
    // but I noted that on the S7 it is 1 less...
    typeUcharUV.setX(u.length);
    Allocation uAlloc = Allocation.createTyped(rs, typeUcharUV.create());
    uAlloc.copyFrom(u);
    mYuv420.set_uIn(uAlloc);

    Allocation vAlloc = Allocation.createTyped(rs, typeUcharUV.create());
    vAlloc.copyFrom(v);
    mYuv420.set_vIn(vAlloc);

    // handover parameters
    mYuv420.set_picWidth(width);
    mYuv420.set_uvRowStride(uvRowStride);
    mYuv420.set_uvPixelStride(uvPixelStride);

    Bitmap outBitmap = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
    Allocation outAlloc = Allocation.createFromBitmap(rs, outBitmap,
            Allocation.MipmapControl.MIPMAP_NONE, Allocation.USAGE_SCRIPT);

    Script.LaunchOptions lo = new Script.LaunchOptions();
    lo.setX(0, width); // by this we ignore the y's padding zone, i.e. the right side of x between width and yRowStride
    // using safe height
    lo.setY(0, y.length / yRowStride);

    mYuv420.forEach_doConvert(outAlloc, lo);
    outAlloc.copyTo(outBitmap);

    return outBitmap;
}
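For context, a hypothetical caller: the method above can be invoked from an ImageReader that was configured with ImageFormat.YUV_420_888 (the reader setup and listener are assumed here, not part of the original code):

// Sketch of a caller, assuming an ImageReader created with
// ImageFormat.YUV_420_888.
ImageReader.OnImageAvailableListener listener = new ImageReader.OnImageAvailableListener() {
    @Override
    public void onImageAvailable(ImageReader reader) {
        Image image = reader.acquireLatestImage();
        if (image == null) return;
        try {
            Bitmap bitmap = YUV_420_888_toRGB(image, image.getWidth(), image.getHeight());
            // ... use the bitmap (e.g. display it in an ImageView) ...
        } finally {
            image.close(); // release the buffer back to the ImageReader
        }
    }
};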
Testing on a Nexus 7 (API 22), this returns nice color Bitmaps. This device, however, has trivial pixel strides (= 1) and no padding (i.e. row stride = width). Testing on the brand-new Samsung S7 (API 23), I get pictures whose colors are not correct, except for the green ones. The picture does not show a general bias towards green; it just seems that non-green colors are not reproduced correctly. Note that the S7 applies a u/v pixel stride of 2, and no padding.
Since the most crucial code line is the access of the u/v planes within the rs code, uint uvIndex = (...), I think the problem could be there, probably with an incorrect consideration of the pixel strides. Does anyone see the solution? Thanks.
UPDATE: I checked everything, and I am pretty sure that the code regarding the access of y, u and v is correct. So the problem must be with the u and v values themselves. Non-green colors have a purple tilt, and looking at the u/v values, they seem to be in a rather narrow range of about 110-150. Is it really possible that we need to cope with device-specific YUV -> RGB conversions...?! Did I miss anything?
UPDATE 2: I have corrected the code above; it works now, thanks to Eddy's feedback.
For people who encounter the error

android.support.v8.renderscript.RSIllegalArgumentException: Array too small for allocation type

use buffer.capacity() instead of buffer.remaining(), and if you have already performed some operations on the image, you'll need to call the rewind() method on the buffer.
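Applied to the plane extraction at the top of the method, the fix might look like this (a sketch, reusing the original variable names):

// Workaround sketch: rewind first (in case the buffer's position was
// advanced by earlier reads), then size the array by capacity().
ByteBuffer buffer = planes[0].getBuffer();
buffer.rewind();
byte[] y = new byte[buffer.capacity()];
buffer.get(y);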
Look at
floor((float) uvPixelStride*(x)/2)
which calculates your U,V row offset (uv_row_offset) from the Y x-coordinate.
If uvPixelStride = 2, then as x increases:

x = 0, uv_row_offset = 0
x = 1, uv_row_offset = 1
x = 2, uv_row_offset = 2
x = 3, uv_row_offset = 3
and this is incorrect. There's no valid U/V pixel value at uv_row_offset = 1 or 3, since uvPixelStride = 2.
You want
uvPixelStride * floor(x/2)
(assuming you don't trust yourself to remember the critical round-down behavior of integer division; if you do, then)

uvPixelStride * (x/2)

should be enough.
With that, your mapping becomes:
x = 0, uv_row_offset = 0
x = 1, uv_row_offset = 0
x = 2, uv_row_offset = 2
x = 3, uv_row_offset = 2
See if that fixes the color errors. In practice, the incorrect addressing here would mean that every other color sample comes from the wrong color plane, since it's likely that the underlying YUV data is semiplanar (so the U plane starts at the V plane + 1 byte, with the two planes interleaved).
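As a hypothetical way to verify that interleaving claim on a given device, one could compare the first bytes of the two chroma buffers (this diagnostic is my addition, not part of the answer):

// Hypothetical diagnostic: with pixelStride == 2, the U and V planes are
// often interleaved views of one semiplanar buffer, so the U plane starts
// at the V plane + 1 byte, i.e. uBuf.get(i) == vBuf.get(i + 1).
ByteBuffer uBuf = image.getPlanes()[1].getBuffer();
ByteBuffer vBuf = image.getPlanes()[2].getBuffer();
boolean looksInterleaved = image.getPlanes()[1].getPixelStride() == 2
        && vBuf.capacity() > 1
        && uBuf.get(0) == vBuf.get(1);
Log.d("YUV", "interleaved semiplanar layout likely: " + looksInterleaved);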