I need to be able to execute native code (algorithms) on a video stream generated by the device camera. For that I considered OpenCV camera capture; unfortunately, at this time it doesn't support Android 4.0.3 on the Samsung G2, which is my target device. As an alternative, I am considering capturing with the Java Camera object and, via JNI, marshaling the captured data into the native domain. This, however, imposes marshaling overhead (copying the data). To avoid that, I have considered rendering the captured image (preview) to a GL texture (using e.g. Camera.setPreviewTexture) and accessing that GL texture directly from the native domain, thus avoiding the unneeded copying.
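To make the setup concrete, here is a minimal sketch of the Java side I have in mind. It assumes an EGL context is already current on the calling thread (e.g. inside a GLSurfaceView.Renderer), requires API 11+, and omits error handling; the class and method names are mine, not from any SDK.

```java
import android.graphics.SurfaceTexture;
import android.hardware.Camera;
import android.opengl.GLES11Ext;
import android.opengl.GLES20;

// Sketch only: routes the camera preview into a GL_TEXTURE_EXTERNAL_OES
// texture so frames stay on the GPU instead of being copied through a
// Java byte[] and across JNI.
public class CameraToTexture {
    private SurfaceTexture mSurfaceTexture;
    private Camera mCamera;
    private final float[] mTexMatrix = new float[16];

    public void start() throws java.io.IOException {
        // Create the external texture the camera will render into.
        int[] tex = new int[1];
        GLES20.glGenTextures(1, tex, 0);
        GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, tex[0]);

        // Attach the preview to that texture -- no byte[] copy in Java.
        mSurfaceTexture = new SurfaceTexture(tex[0]);
        mCamera = Camera.open();
        mCamera.setPreviewTexture(mSurfaceTexture);
        mCamera.startPreview();
    }

    // Call once per frame on the GL thread.
    public void onDrawFrame() {
        mSurfaceTexture.updateTexImage();               // latch the newest camera frame
        mSurfaceTexture.getTransformMatrix(mTexMatrix); // pass to the shader sampling the texture
        // ... draw a quad sampling the external texture with mTexMatrix ...
    }
}
```

Since the native code shares the same EGL context, it would only need the texture name (tex[0]) handed across JNI, not the pixel data itself.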
Is there a way to directly access the SurfaceTexture from the native domain?
Any help will be appreciated.
Nadav At Sophin
The VideoPlayback sample app released as part of Qualcomm's Vuforia augmented reality SDK achieves this, I think. I've only just started going over the code myself these past few days, and a lot of it is new to me, so I'm not 100% sure.
https://ar.qualcomm.at/content/video-playback-sample-app-posted
If I'm right, the app plays a movie file through the Java domain's MediaPlayer class, which renders into a SurfaceTexture; that texture is then accessed by the OpenGL ES code in the native domain and rendered onto the actual augmented reality display.
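If that is indeed how it works, the GL code (Java or native) never touches the pixels directly; it samples the SurfaceTexture through the GL_OES_EGL_image_external extension. A sketch of what such a fragment shader looks like (the uniform and varying names are mine, for illustration):

```java
// Sketch: fragment shader for sampling a SurfaceTexture-backed texture.
// The samplerExternalOES type requires the OES extension directive.
public class CameraPreviewShader {
    public static final String FRAGMENT_SHADER =
        "#extension GL_OES_EGL_image_external : require\n" +
        "precision mediump float;\n" +
        "uniform samplerExternalOES sTexture;\n" +
        "varying vec2 vTexCoord;\n" +
        "void main() {\n" +
        "    gl_FragColor = texture2D(sTexture, vTexCoord);\n" +
        "}\n";
}
```

The same shader source can be compiled from native OpenGL ES code; only the integer texture name needs to cross the JNI boundary.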