I am currently studying the code of a GitHub project, ScreenCapture, which captures the screen and shows the image in a SurfaceView: https://github.com/Charlesjean/android-ScreenCapture. I tried to replace the surface of the SurfaceView with the surface of an ImageReader object, using the code below:
mImgReader = ImageReader.newInstance(mWidth, mHeight, ImageFormat.JPEG, 5);
mSurface = mImgReader.getSurface(); // mSurfaceView.getHolder().getSurface();
mImgReader.setOnImageAvailableListener(new ImageReader.OnImageAvailableListener() {
    @Override
    public void onImageAvailable(ImageReader reader) {
        Log.i(TAG, "in OnImageAvailable");
    }
}, mHandler);
and created the VirtualDisplay like this:
mVirtualDisplay = mMediaProjection.createVirtualDisplay("ScreenCapture",
        mWidth, mHeight, mScreenDensity,
        DisplayManager.VIRTUAL_DISPLAY_FLAG_OWN_CONTENT_ONLY |
        DisplayManager.VIRTUAL_DISPLAY_FLAG_PUBLIC,
        mSurface, new VirtualDisplay.Callback() {
            @Override
            public void onResumed() {
                Log.i(TAG, "onResumed");
                super.onResumed();
            }

            @Override
            public void onPaused() {
                Log.i(TAG, "onPaused");
                super.onPaused();
            }
        }, mHandler);
but the onImageAvailable method is never called. Does anyone have experience with this? I could not figure out why it does not work.
Thanks Simon, I solved the problem by changing the image format to PixelFormat.RGBA_8888. There are some other points you need to take care of when doing what I did; I post them here in case they help someone in the future.
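For reference, here is the corrected ImageReader setup (a minimal sketch of my change; mWidth, mHeight, and mHandler are the same fields as in the question):

// Same as before, but with PixelFormat.RGBA_8888 instead of ImageFormat.JPEG;
// with JPEG the display pipeline never delivered a frame for me.
mImgReader = ImageReader.newInstance(mWidth, mHeight, PixelFormat.RGBA_8888, 5);
mSurface = mImgReader.getSurface();
mImgReader.setOnImageAvailableListener(new ImageReader.OnImageAvailableListener() {
    @Override
    public void onImageAvailable(ImageReader reader) {
        Log.i(TAG, "in OnImageAvailable"); // now called for every frame
    }
}, mHandler);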
The data buffer of Image.Plane is not exactly the same as the data buffer needed by Bitmap:
1. The image format we used to create the ImageReader is PixelFormat.RGBA_8888, so the buffer of Image.Plane places the R(ed) channel first, then the G(reen) channel, and so on. To convert this buffer to a bitmap, we need to create the bitmap like this: bitmap = Bitmap.createBitmap(metrics, width, height, Bitmap.Config.ARGB_8888); and the pixel values we write into the bitmap must put the Alpha channel first (the ARGB int layout expected by setPixel).
2. The buffer we get from Image.Plane has some padding at the end of each row; personally I think this is used by the hardware to accelerate buffer operations or for alignment. So when copying this buffer, we need to skip that padding.
To understand the two points, please see the code below:
public void onImageAvailable(ImageReader reader) {
    Log.i(TAG, "in OnImageAvailable");
    FileOutputStream fos = null;
    Bitmap bitmap = null;
    Image img = null;
    try {
        img = reader.acquireLatestImage();
        if (img != null) {
            Image.Plane[] planes = img.getPlanes();
            if (planes[0].getBuffer() == null) {
                return;
            }
            int width = img.getWidth();
            int height = img.getHeight();
            // Point 2: each row may be padded beyond width * pixelStride.
            int pixelStride = planes[0].getPixelStride();
            int rowStride = planes[0].getRowStride();
            int rowPadding = rowStride - pixelStride * width;
            int offset = 0;
            // metrics is a DisplayMetrics field describing the target display.
            bitmap = Bitmap.createBitmap(metrics, width, height, Bitmap.Config.ARGB_8888);
            ByteBuffer buffer = planes[0].getBuffer();
            for (int i = 0; i < height; ++i) {
                for (int j = 0; j < width; ++j) {
                    int pixel = 0;
                    // Point 1: the plane is RGBA, but setPixel expects an ARGB int.
                    pixel |= (buffer.get(offset) & 0xff) << 16;     // R
                    pixel |= (buffer.get(offset + 1) & 0xff) << 8;  // G
                    pixel |= (buffer.get(offset + 2) & 0xff);       // B
                    pixel |= (buffer.get(offset + 3) & 0xff) << 24; // A
                    bitmap.setPixel(j, i, pixel);
                    offset += pixelStride;
                }
                offset += rowPadding; // skip the row padding
            }
            String name = "/myscreen" + count + ".png"; // count is an int field
            count++;
            File file = new File(Environment.getExternalStorageDirectory(), name);
            fos = new FileOutputStream(file);
            bitmap.compress(Bitmap.CompressFormat.PNG, 100, fos); // PNG to match the .png name
            Log.i(TAG, "image saved in " + Environment.getExternalStorageDirectory() + name);
        }
    } catch (Exception e) {
        e.printStackTrace();
    } finally {
        if (null != fos) {
            try {
                fos.close();
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
        if (null != bitmap) {
            bitmap.recycle();
        }
        if (null != img) {
            img.close(); // close the image exactly once, here in finally
        }
    }
}
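As a side note, the per-pixel setPixel loop above is slow for full-screen frames. A faster alternative I have seen used (a sketch, not part of the original project; it assumes pixelStride is 4 and relies on ARGB_8888 bitmaps storing their pixels as RGBA in memory) is to let the bitmap absorb the row padding and crop afterwards:

// width, height, rowStride, pixelStride and planes come from the Image,
// exactly as in the code above. Create a bitmap wide enough to include the
// row padding, bulk-copy the whole RGBA buffer, then crop to the real width.
Bitmap padded = Bitmap.createBitmap(rowStride / pixelStride, height,
        Bitmap.Config.ARGB_8888);
padded.copyPixelsFromBuffer(planes[0].getBuffer());
Bitmap cropped = Bitmap.createBitmap(padded, 0, 0, width, height);
padded.recycle(); // cropped now holds the clean frame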