I want to do camera image processing on the GPU on Android.
In my current setup I use a SurfaceTexture to capture frames from the camera stream as an OpenGL ES texture. This is an efficient way to make the camera stream accessible in my shaders. (http://developer.android.com/reference/android/graphics/SurfaceTexture.html)
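For context, a minimal sketch of the setup described above (class and method names other than the framework's are illustrative): an external OES texture is generated, wrapped in a SurfaceTexture, and handed to the camera as its preview target.

```java
import android.graphics.SurfaceTexture;
import android.hardware.Camera;
import android.opengl.GLES11Ext;
import android.opengl.GLES20;

import java.io.IOException;

public class CameraTextureSetup {

    // Must be called on a thread with a current EGL context.
    public SurfaceTexture createCameraTexture(Camera camera) throws IOException {
        // Generate a GL texture ID and bind it as the external OES type
        // the camera stream requires.
        int[] tex = new int[1];
        GLES20.glGenTextures(1, tex, 0);
        GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, tex[0]);

        // Wrap the texture ID in a SurfaceTexture and make it the
        // camera's preview target.
        SurfaceTexture surfaceTexture = new SurfaceTexture(tex[0]);
        camera.setPreviewTexture(surfaceTexture);
        camera.startPreview();

        // For each new frame, call surfaceTexture.updateTexImage() on the
        // GL thread before sampling the texture in a shader.
        return surfaceTexture;
    }
}
```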
Now I would like to start using the new RenderScript API instead of using OpenGL ES directly. (http://developer.android.com/guide/topics/renderscript/index.html)
But to create a SurfaceTexture, I need to pass an OpenGL texture ID to the constructor. Unfortunately that texture ID is not available: RenderScript uses the Allocation class to load textures, which does not expose the texture ID. So I am not able to create a SurfaceTexture when using RenderScript.
I have read all the RenderScript documentation (which is still pretty sparse) and looked at the samples, but they contain no information on the subject.
So my question is: Is it possible to use SurfaceTexture in combination with RenderScript, or is there some other efficient way to use the live camera stream in a RenderScript Graphics script?
If I understand correctly, you already use a SurfaceTexture. You can then register a callback with setOnFrameAvailableListener.
I see two solutions:
Implement your own RSTextureView, which implements SurfaceTexture.OnFrameAvailableListener, and register your view as the SurfaceTexture's callback. Every time your surface is updated by the camera stream, your RSTextureView will be notified and you can handle the frame the way you want.
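A minimal sketch of that first approach, assuming a subclass of the (now deprecated) android.renderscript.RSTextureView; the class name, the attachToCamera helper, and the body of the callback are assumptions, not framework API:

```java
import android.content.Context;
import android.graphics.SurfaceTexture;
import android.renderscript.RSTextureView;

// Hypothetical view that is notified whenever the camera pushes a new
// frame into its SurfaceTexture.
public class CameraRSTextureView extends RSTextureView
        implements SurfaceTexture.OnFrameAvailableListener {

    public CameraRSTextureView(Context context) {
        super(context);
    }

    // Call this once the camera's SurfaceTexture exists, so this view
    // receives onFrameAvailable for every new camera frame.
    public void attachToCamera(SurfaceTexture cameraTexture) {
        cameraTexture.setOnFrameAvailableListener(this);
    }

    @Override
    public void onFrameAvailable(SurfaceTexture surfaceTexture) {
        // A new camera frame is ready; trigger your RenderScript
        // processing / redraw here.
    }
}
```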
Another solution would be to implement your own RenderScriptGL (still implementing SurfaceTexture.OnFrameAvailableListener) and call setSurfaceTexture when the callback fires.
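A sketch of that second approach, assuming a subclass of the (now deprecated) android.renderscript.RenderScriptGL; the class name and stored dimensions are assumptions:

```java
import android.content.Context;
import android.graphics.SurfaceTexture;
import android.renderscript.RenderScriptGL;

// Hypothetical RenderScriptGL that re-binds the camera's SurfaceTexture
// to the RenderScript context on every new frame.
public class CameraRenderScriptGL extends RenderScriptGL
        implements SurfaceTexture.OnFrameAvailableListener {

    private final int mWidth;
    private final int mHeight;

    public CameraRenderScriptGL(Context ctx, SurfaceConfig sc,
                                int width, int height) {
        super(ctx, sc);
        mWidth = width;
        mHeight = height;
    }

    @Override
    public void onFrameAvailable(SurfaceTexture surfaceTexture) {
        // Hand the updated camera texture to the RenderScript context.
        setSurfaceTexture(surfaceTexture, mWidth, mHeight);
    }
}
```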
You should be able to combine RenderScript with a SurfaceTexture using at least one of these two solutions.