According to the Android camera docs on the Java SDK side, camera preview frames have to be supplied with a (visible and active) surface to draw to before the frame data can be accessed. I have linked a few of the things I came across here (I'm new, so I'm capped at 2 hyperlinks), but I went over tons of documentation before winding up posting my own question here on SO.
a) I explicitly don't want to draw the camera preview to the screen; I just want the byte data (which I can get from here) straight from the camera buffer, if possible.
b) Yes, I saw this: Taking picture from camera without preview.
However, this dictates that apps using the library have to insert this (seemingly) arbitrary view into their layout, and it has to stay visible for the app's entire life-cycle (they can't switch layouts, change the visibility of parent containers, or use a sub-activity).
In fact, my needs are similar to that poster's, except that I want a continuous, real-time stream of the camera preview data, not a capture saved to an image on disk. Hence PictureCallback works for him together with the myCamera.takePicture call. For obvious reasons, writing continuous captures to disk won't work in my case, and myCamera.takePicture is also much slower than getting the preview frames.
c) I have started dabbling with the NDK and have gotten a pretty good feel for it. However, according to this, accessing camera data natively is simply not supported or recommended, and even then it's a huge hassle for device compatibility.
If that information is outdated and there are solid NDK routes to acquiring camera data on Android devices, I couldn't find them, so pointers would be great.
d) I want to make this library accessible from tools like Unity (in the form of a Unity plugin), so I want to be able to compile it into a JAR (or a .so for native code) and have it work in any Android app that imports it, letting those apps use the camera without any specific UI/layout configuration on the app developer's part.
In short, I want to create a vision-processing library for use in Android apps, and I don't want to force apps that use it into specific layouts, or into drawing specific views to the screen, just to consume the vision-processing results. As a very simple example: an app might use my library to get the average color of what the camera sees and tint an image on the screen with that color.
Whatever suggestions you can offer on any of these points would be super helpful. Thanks a lot for your time!
I completely forgot I had this question up. 2 years and a couple of Android SDK versions later, we have a working system.
We're using an extended SurfaceTexture, cameraSurface, which holds a reference to the required camera. Using cameraSurface's SurfaceTexture, we call:
mCamera.setPreviewTexture(mSurfaceTexture); // deliver preview frames to the off-screen texture
mCamera.startPreview();                     // start streaming; nothing is drawn on screen
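For anyone wiring this up from scratch, here's a minimal, self-contained sketch of the idea. HeadlessCameraSource is a name I'm making up here, and the texture name passed to SurfaceTexture (10 below) is an arbitrary placeholder; since nothing ever samples the texture, the camera just needs a valid sink for its frames. Whether an unattached texture name works can vary by device, so treat this as a sketch, not a guaranteed recipe:

import android.graphics.SurfaceTexture;
import android.hardware.Camera;
import java.io.IOException;

// Hypothetical helper: streams camera frames with no on-screen view.
public class HeadlessCameraSource {
    private Camera mCamera;
    private SurfaceTexture mSurfaceTexture;

    public void start() throws IOException {
        mCamera = Camera.open(); // default back-facing camera
        // Arbitrary texture name: nothing renders this texture, it only
        // gives the camera somewhere to deliver preview frames.
        mSurfaceTexture = new SurfaceTexture(10);
        mCamera.setPreviewTexture(mSurfaceTexture);
        mCamera.startPreview(); // frames now flow, with no visible preview
    }

    public void stop() {
        if (mCamera != null) {
            mCamera.setPreviewCallback(null);
            mCamera.stopPreview();
            mCamera.release();
            mCamera = null;
        }
        if (mSurfaceTexture != null) {
            mSurfaceTexture.release();
            mSurfaceTexture = null;
        }
    }
}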
Then set the active camera's PreviewCallback from your activity, or wherever else you need it:
mCamera.setPreviewCallback(new PreviewCallback() {
    @Override
    public void onPreviewFrame(final byte[] data, final Camera camera) {
        // Process the contents of data (the raw preview frame bytes) however you need
    }
});
This allows you to continuously process what the camera sees, rather than just having to go through stored images. Of course, this will be limited to your camera's preview resolution (which may be different from the still capture or video resolutions).
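The frames arrive in whatever preview format you've set on the camera; by default that's NV21, where the first width × height bytes are the Y (luminance) plane. As a rough sketch of the average-color example from the question, averaging luminance only (the helper name is mine, not part of any API):

public static int averageLuminance(byte[] nv21, int width, int height) {
    long sum = 0;
    final int pixels = width * height; // the Y plane comes first in NV21
    for (int i = 0; i < pixels; i++) {
        sum += nv21[i] & 0xFF; // bytes are signed in Java, so mask to 0..255
    }
    return (int) (sum / pixels); // average brightness, 0..255
}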
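One caveat: setPreviewCallback allocates a fresh byte[] for every frame, which can cause noticeable GC churn at 30 fps. If that matters, setPreviewCallbackWithBuffer lets you recycle a preallocated buffer instead; a sketch, assuming mCamera is already opened and configured as above:

Camera.Parameters params = mCamera.getParameters();
Camera.Size size = params.getPreviewSize();
int bitsPerPixel = android.graphics.ImageFormat.getBitsPerPixel(params.getPreviewFormat());
byte[] buffer = new byte[size.width * size.height * bitsPerPixel / 8];

mCamera.addCallbackBuffer(buffer);
mCamera.setPreviewCallbackWithBuffer(new Camera.PreviewCallback() {
    @Override
    public void onPreviewFrame(byte[] data, Camera camera) {
        // ... process data ...
        camera.addCallbackBuffer(data); // hand the buffer back for the next frame
    }
});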
If I get the time, I'll try to throw up a barebones working demo.
Edit: The SDK I had linked to is no longer freely available. You can still request one via the BitGym contact page.