How to pass Camera preview to the Surface created by MediaCodec.createInputSurface()?

Ideally I'd like to accomplish two goals:

  1. Pass the Camera preview data to a MediaCodec encoder via a Surface. I can create the Surface using MediaCodec.createInputSurface() but the Camera.setPreviewDisplay() takes a SurfaceHolder, not a Surface.
  2. In addition to passing the Camera preview data to the encoder, I'd also like to display the preview on-screen (so the user can actually see what they are encoding). If the encoder wasn't involved then I'd use a SurfaceView, but that doesn't appear to work in this scenario since SurfaceView creates its own Surface and I think I need to use the one created by MediaCodec.
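For reference, a minimal sketch of the type mismatch in goal #1 (API 18-era `android.hardware.Camera` API; configuration details omitted, names illustrative):

```java
MediaCodec encoder = MediaCodec.createEncoderByType("video/avc");
// ... configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE) ...
Surface inputSurface = encoder.createInputSurface(); // returns a Surface

Camera camera = Camera.open();
// Camera.setPreviewDisplay() wants a SurfaceHolder, not a Surface,
// so there is no direct way to hand it the encoder's input surface:
// camera.setPreviewDisplay(inputSurface);   // does not compile
```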

I've searched online quite a bit for a solution and haven't found one. Some examples on bigflake.com seem like a step in the right direction but they take an approach that adds a bunch of EGL/SurfaceTexture overhead that I'd like to avoid. I'm hoping there is a simpler example or solution where I can get the Camera and MediaCodec talking more directly without involving EGL or textures.

Asked Oct 28 '13 by Andrew Cottrell

1 Answer

As of Android 4.3 (API 18), the bigflake CameraToMpegTest approach is the correct way.

The EGL/SurfaceTexture overhead is currently unavoidable, especially for what you want to do in goal #2. The idea is:

  • Configure the Camera to send the output to a SurfaceTexture. This makes the Camera output available to GLES as an "external texture".
  • Render the SurfaceTexture to the Surface returned by MediaCodec#createInputSurface(). That feeds the video encoder.
  • Render the SurfaceTexture a second time, to a GLSurfaceView. That puts it on the display for real-time preview.

The only data copying that happens is performed by the GLES driver, so you're doing hardware-accelerated blits, which will be fast.

The only tricky bit is you want the external texture to be available to two different EGL contexts (one for the MediaCodec, one for the GLSurfaceView). You can see an example of creating a shared context in the "Android Breakout game recorder patch" sample on bigflake -- it renders the game twice, once to the screen, once to a MediaCodec encoder.
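Context sharing itself is one EGL call: the third argument to `eglCreateContext()` is the context to share object namespaces with. A sketch using EGL14 (error checking omitted; `eglDisplay` and `eglConfig` assumed from earlier setup):

```java
// Create a second context that shares textures with the current one.
EGLContext firstContext = EGL14.eglGetCurrentContext();
int[] attribs = { EGL14.EGL_CONTEXT_CLIENT_VERSION, 2, EGL14.EGL_NONE };
EGLContext sharedContext = EGL14.eglCreateContext(
        eglDisplay, eglConfig, firstContext /* share_context */, attribs, 0);
// Textures created in either context, including the external camera
// texture, are now visible to both.
```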

Update: This is implemented in Grafika ("Show + capture camera").

Update: The multi-context approach used in "Show + capture camera" is somewhat flawed. The "Continuous capture" Activity instead uses a plain SurfaceView, and is able to do both screen rendering and video recording with a single EGL context. This single-context approach is the recommended one.
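The single-context pattern boils down to one EGLContext with two window surfaces that you make current in turn (a sketch along the lines of "Continuous capture"; `drawFrame()` and the EGL setup variables are illustrative):

```java
int[] surfaceAttribs = { EGL14.EGL_NONE };
EGLSurface displaySurface = EGL14.eglCreateWindowSurface(
        eglDisplay, eglConfig, surfaceView.getHolder().getSurface(), surfaceAttribs, 0);
EGLSurface encoderSurface = EGL14.eglCreateWindowSurface(
        eglDisplay, eglConfig, encoder.createInputSurface(), surfaceAttribs, 0);

// Per frame, with the one shared context:
EGL14.eglMakeCurrent(eglDisplay, displaySurface, displaySurface, eglContext);
drawFrame();                                   // render to the screen
EGL14.eglSwapBuffers(eglDisplay, displaySurface);

EGL14.eglMakeCurrent(eglDisplay, encoderSurface, encoderSurface, eglContext);
drawFrame();                                   // render the same frame to the encoder
EGL14.eglSwapBuffers(eglDisplay, encoderSurface);
```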

Answered Oct 18 '22 by fadden