I am retrieving a raw image from a camera. The image is 80 x 60 pixels in 4-bit grayscale, so two pixels are packed into each byte.
I retrieve the image as a byte array that is 2400 (1/2 * 80 * 60) bytes long. The next step is to convert the byte array into a Bitmap. I have already tried
BitmapFactory.decodeByteArray(bytes, 0, bytes.length)
but that didn't return a displayable image. I looked at this post and copied the code below into my Android application, but I got a "buffer not large enough for pixels" runtime error.
byte[] Src; // Comes from somewhere...
byte[] Bits = new byte[Src.length * 4]; // That's where the RGBA array goes.
for (int i = 0; i < Src.length; i++) {
    // Invert the source bits; the cast is needed because ~ promotes the byte to int
    Bits[i * 4] = Bits[i * 4 + 1] = Bits[i * 4 + 2] = (byte) ~Src[i];
    Bits[i * 4 + 3] = -1; // 0xff, that's the alpha
}
// Now put these nice RGBA pixels into a Bitmap object
Bitmap bm = Bitmap.createBitmap(Width, Height, Bitmap.Config.ARGB_8888);
bm.copyPixelsFromBuffer(ByteBuffer.wrap(Bits));
At the bottom of that thread, the original poster had the same error I currently have; however, his problem was fixed by the code pasted above. Does anyone have any suggestions on how to convert the raw image or RGBA array into a Bitmap?
Thanks so much!
UPDATE:
I followed Geobits' suggestion, and this is my new code:
byte[] separatedBytes = new byte[jpegBytes.length * 8];
for (int i = 0; i < jpegBytes.length; i++) {
    // Upper nibble -> R, G, B of the first pixel
    separatedBytes[i * 8] = separatedBytes[i * 8 + 1] = separatedBytes[i * 8 + 2] = (byte) ((jpegBytes[i] >> 4) & 0x0F);
    // Lower nibble -> R, G, B of the second pixel
    separatedBytes[i * 8 + 4] = separatedBytes[i * 8 + 5] = separatedBytes[i * 8 + 6] = (byte) (jpegBytes[i] & 0x0F);
    // Alpha channels
    separatedBytes[i * 8 + 3] = separatedBytes[i * 8 + 7] = -1; // 0xFF
}
Now I am able to get a Bitmap using this command:
Bitmap bm = BitmapFactory.decodeByteArray(separatedBytes, 0, separatedBytes.length);
but the Bitmap has a size of 0 KB.
The image I am getting is a raw image from this camera. Unfortunately, retrieving a pre-compressed JPEG image is not an option because I need 4-bit grayscale.
If the image coming in is only 2400 bytes, that means there are two pixels per byte (4 bits each). You're only giving the byte buffer 2400 * 4 = 9600 bytes, when ARGB_8888 needs 4 bytes per pixel, or 60 * 80 * 4 = 19200.
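As a quick sanity check (a sketch; the dimensions are the 80 x 60 from the question):

int width = 80, height = 60;
int incoming = (width * height) / 2;  // 2400 bytes: two 4-bit pixels packed per byte
int required = width * height * 4;    // 19200 bytes: ARGB_8888 uses 4 bytes per pixel
// incoming * 4 = 9600, which is why copyPixelsFromBuffer() reports the buffer is too small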
You need to split each incoming byte into an upper and a lower nibble value, then apply those to the corresponding 8 outgoing bytes (excluding the alpha bytes). You can see this answer for an example of how to split bytes.
Basically:

- split each incoming byte `i` into two nibbles, `ia` and `ib`
- apply `ia` to outgoing bytes `i*8` through `(i*8)+2`
- apply `ib` to outgoing bytes `(i*8)+4` through `(i*8)+6`
- outgoing bytes `(i*8)+3` and `(i*8)+7` are alpha (`0xFF`)

Once you have the right size byte buffer, you should be able to use copyPixelsFromBuffer() with no problems (decodeByteArray() only handles compressed formats such as JPEG or PNG, which is why it gives you nothing for raw pixels).
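Putting that together, here is a minimal sketch of the whole conversion (rawToBitmap and rawBytes are placeholder names; note that each 4-bit value is multiplied by 17 to stretch the 0..15 range onto the full 0..255 range, so the image is not nearly black):

import android.graphics.Bitmap;
import java.nio.ByteBuffer;

static Bitmap rawToBitmap(byte[] rawBytes, int width, int height) {
    // 2 pixels per incoming byte, 4 outgoing bytes (RGBA) per pixel
    byte[] rgba = new byte[rawBytes.length * 8];
    for (int i = 0; i < rawBytes.length; i++) {
        byte hi = (byte) (((rawBytes[i] >> 4) & 0x0F) * 17); // upper nibble -> pixel 2i
        byte lo = (byte) ((rawBytes[i] & 0x0F) * 17);        // lower nibble -> pixel 2i+1
        rgba[i * 8]     = rgba[i * 8 + 1] = rgba[i * 8 + 2] = hi; // R, G, B
        rgba[i * 8 + 3] = (byte) 0xFF;                            // alpha
        rgba[i * 8 + 4] = rgba[i * 8 + 5] = rgba[i * 8 + 6] = lo; // R, G, B
        rgba[i * 8 + 7] = (byte) 0xFF;                            // alpha
    }
    Bitmap bm = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
    bm.copyPixelsFromBuffer(ByteBuffer.wrap(rgba)); // raw pixels, not a compressed stream
    return bm;
}

For an 80 x 60 frame that would be rawToBitmap(cameraBytes, 80, 60); the wrapped buffer is 2400 * 8 = 19200 bytes, exactly what ARGB_8888 expects.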
If I understand you correctly, you have a byte array whose bytes each contain two greyscale values.
A greyscale value can be considered a simple intensity value, can't it?
So the first thing you need to do is to separate your greyscale values into single bytes, as BitmapFactory.decodeByteArray can probably not handle your "half bytes". This can easily be done with bit operations. To obtain the first 4 bits of your byte `0bxxxxxxxx`, you need to right-shift four times: `0bxxxxxxxx >> 4`, which leads to `0b0000xxxx`. The second value can be obtained by a bitwise AND with the mask `0b00001111`: `0bxxxxxxxx & 0b00001111`, which also leads to `0b0000xxxx`. (For detailed information about bit operations see here: Bit-Operations)
Those two values can now be stored in a new byte array.
If you do this for every pair of half-bytes, you'll end up with an array of full bytes, double the size of the original.
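For example (a minimal sketch; `packed` stands in for your incoming array):

byte[] unpacked = new byte[packed.length * 2];
for (int i = 0; i < packed.length; i++) {
    // the & 0x0F masks off the bits that Java's sign extension would otherwise set
    unpacked[i * 2]     = (byte) ((packed[i] >> 4) & 0x0F); // upper half-byte
    unpacked[i * 2 + 1] = (byte) (packed[i] & 0x0F);        // lower half-byte
}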
I'm not sure if this is already enough for `BitmapFactory.decodeByteArray`, or if you have to repeat every byte four times, once for each RGBA channel, which would increase the size of your byte array by another factor of four.
I hope I did not misunderstand anything and my suggestion helps ;)