Using RenderScript for processing and MediaCodec for encoding

I am trying to develop a camera app that does some video processing before recording the video. I have decided to use RenderScript for the processing, since it provides many of the operations I want to use, and MediaCodec for encoding. I have found a few samples (including Grafika) that show how to do the processing with GLES, but I haven't found one that shows how to do it with RenderScript. Trying to replace GLES with RenderScript, I have the following questions:

  1. I create the RenderScript output Allocation from the encoder's input Surface. In the Grafika sample, EGL swapBuffers() is used to send the buffer to the encoder (the EGL pattern being referenced is sketched below). Does Allocation.ioSend() do the same thing?
  2. With EGL, setPresentationTime() is used to set the timestamp. How do I set the timestamp on RenderScript's Allocation?
  3. Should I be using MediaCodec.queueInputBuffer() instead to submit the input buffer and timestamp? In that case, should I still call Allocation.ioSend() before calling queueInputBuffer()?
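
For reference, the Grafika EGL pattern the questions refer to boils down to the raw EGL14/EGLExt calls below (a minimal sketch; the EGLDisplay/EGLSurface are assumed to have been created around the Surface from MediaCodec#createInputSurface()):

// Sketch of the EGL path referenced above; Grafika's WindowSurface wraps these calls.
// eglDisplay and eglSurface are assumed to wrap MediaCodec#createInputSurface().
void submitFrameToEncoder(EGLDisplay eglDisplay, EGLSurface eglSurface, long ptsNanos) {
    // Grafika's setPresentationTime(): tags the next swapped buffer with a timestamp
    EGLExt.eglPresentationTimeANDROID(eglDisplay, eglSurface, ptsNanos);
    // Grafika's swapBuffers(): submits the rendered frame to the encoder's input Surface
    EGL14.eglSwapBuffers(eglDisplay, eglSurface);
}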

asked Jan 28 '15 by Phyxle

1 Answer

I came across this same issue, and the solution I use is to set the timestamp via EGL, similar to RecordFBOActivity#doFrame in Grafika. To do this, an intermediate Allocation is used to bridge the gap between RenderScript and OpenGL/EGL.

Let's view the data flow as a processing pipeline with stages.

Original Pipeline

[Camera]
   --> [ImageAllocation]
       --> [RenderScript]
           --> [MediaCodecSurfaceAllocationForEncoder]
               --> [MediaCodec]

In the original pipeline, all buffers are RenderScript Allocations.

MediaCodecSurfaceAllocationForEncoder is an Allocation backed by the Surface returned from the encoder, i.e., MediaCodec#createInputSurface().
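
For context, the encoder-facing Allocation in this original pipeline is created roughly like this (a sketch; renderScript, width, height and encoder are assumed to exist, and the variable names are illustrative):

// Sketch: an Allocation whose consumer is the encoder's input Surface.
Type rgbaType = new Type.Builder(renderScript, Element.RGBA_8888(renderScript))
        .setX(width)
        .setY(height)
        .create();

Allocation encoderAllocation = Allocation.createTyped(
        renderScript,
        rgbaType,
        Allocation.USAGE_SCRIPT | Allocation.USAGE_IO_OUTPUT);

// createInputSurface() must be called after configure() and before start()
encoderAllocation.setSurface(encoder.createInputSurface());

// After a kernel writes into encoderAllocation, ioSend() pushes the frame to the
// Surface consumer (the rough analogue of EGL swapBuffers()), but there is no way
// to attach a presentation timestamp, which is what motivates the new pipeline below.
encoderAllocation.ioSend();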

New Pipeline

[Camera]
    --> [ImageAllocation]
        --> [RenderScript]
            --> [IntermediateAllocation]
                --> [EglWindowSurfaceForEncoder]
                    --> [MediaCodec]

In the new pipeline there are two big changes: IntermediateAllocation and EglWindowSurfaceForEncoder.

IntermediateAllocation is a SurfaceTexture-backed Allocation, similar to the full-screen texture blitter used in CameraCaptureActivity.

EglWindowSurfaceForEncoder wraps the encoder's input Surface, similar to RecordFBOActivity#startEncoder.
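
The setup code below uses encoderCore.getInputSurface() without showing how the encoder is created; a minimal sketch of that part, assuming an H.264 surface-input encoder (encoderCore and the bitrate/frame-rate values are illustrative; Grafika's VideoEncoderCore does essentially this):

// Sketch: create a surface-input video encoder whose input Surface the EGL window surface wraps.
MediaFormat format = MediaFormat.createVideoFormat("video/avc", width, height);
format.setInteger(MediaFormat.KEY_COLOR_FORMAT,
        MediaCodecInfo.CodecCapabilities.COLOR_FormatSurface);
format.setInteger(MediaFormat.KEY_BIT_RATE, 4000000);
format.setInteger(MediaFormat.KEY_FRAME_RATE, 30);
format.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 1);

MediaCodec encoder = MediaCodec.createEncoderByType("video/avc");
encoder.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);

// This is the Surface returned by encoderCore.getInputSurface() in setup()
Surface encoderInputSurface = encoder.createInputSurface();
encoder.start();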

The key here is to set your own OnFrameAvailableListener.

Setup Code

void setup() {
    // EGL window surface that wraps the encoder's input Surface
    mEglWindowSurfaceForEncoder = new WindowSurface(mEglCore, encoderCore.getInputSurface(), true);

    // Full-screen blitter and the SurfaceTexture that will receive the RenderScript output
    mFullScreen = new FullFrameRect(
            new Texture2dProgram(Texture2dProgram.ProgramType.TEXTURE_EXT));
    mTextureId = mFullScreen.createTextureObject();
    mSurfaceTexture = new SurfaceTexture(mTextureId);

    // RenderScript output Allocation whose consumer is the SurfaceTexture above
    Type renderType = new Type.Builder(renderScript, Element.RGBA_8888(renderScript))
            .setX(width)
            .setY(height)
            .create();

    mIntermediateAllocation = Allocation.createTyped(
            renderScript,
            renderType,
            Allocation.USAGE_SCRIPT | Allocation.USAGE_IO_OUTPUT);

    mIntermediateAllocation.setSurface(new Surface(mSurfaceTexture));

    mAllocationFromCamera = ...
}

OnNewCameraImage

mIntermediateAllocation.copyFrom(mAllocationFromCamera);
// push the frame to the SurfaceTexture consumer; this is what makes onFrameAvailable fire
mIntermediateAllocation.ioSend();
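
The copyFrom() above is a stand-in for whatever processing is done. If, for example, the camera Allocation is YUV, the processing step into the RGBA intermediate Allocation could look roughly like this (a sketch using the ScriptIntrinsicYuvToRGB intrinsic; a custom ScriptC forEach kernel would slot in the same place):

// Sketch: process the camera frame into the intermediate Allocation, then push it out.
ScriptIntrinsicYuvToRGB yuvToRgb =
        ScriptIntrinsicYuvToRGB.create(renderScript, Element.U8_4(renderScript));

void onNewCameraImage() {
    mAllocationFromCamera.ioReceive();          // latch the newest camera buffer (USAGE_IO_INPUT)
    yuvToRgb.setInput(mAllocationFromCamera);   // or run any custom processing kernel here
    yuvToRgb.forEach(mIntermediateAllocation);  // write the processed pixels into the RGBA Allocation
    mIntermediateAllocation.ioSend();           // send the frame to the SurfaceTexture consumer
}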

OnFrameAvailableListener

mSurfaceTexture.setOnFrameAvailableListener(
    new SurfaceTexture.OnFrameAvailableListener() {
        @Override
        public void onFrameAvailable(SurfaceTexture surfaceTexture) {

            // latch the image data written by RenderScript into the SurfaceTexture
            mSurfaceTexture.updateTexImage();

            // draw the frame into the encoder's EGL window surface
            mSurfaceTexture.getTransformMatrix(mSTMatrix);
            mFullScreen.drawFrame(mTextureId, mSTMatrix);

            // set the timestamp, then hand the frame to the encoder
            // (timestampNanos can be taken from mSurfaceTexture.getTimestamp()
            //  after updateTexImage())
            mEglWindowSurfaceForEncoder.setPresentationTime(timestampNanos);
            mEglWindowSurfaceForEncoder.swapBuffers();
        }
    });

The above code must run with the EGL context current, i.e., on the OpenGL rendering thread.
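
One way to arrange that (a sketch, assuming API 21+ for the two-argument setOnFrameAvailableListener() and Grafika's EglCore helper; glThread, glHandler and frameListener are illustrative names, with frameListener being the listener shown above):

// Sketch: a dedicated thread that owns the EGL context and receives the frame callbacks.
HandlerThread glThread = new HandlerThread("GLThread");
glThread.start();
final Handler glHandler = new Handler(glThread.getLooper());

glHandler.post(new Runnable() {
    @Override
    public void run() {
        mEglCore = new EglCore(null, EglCore.FLAG_RECORDABLE);
        setup();                                    // the setup() shown above
        mEglWindowSurfaceForEncoder.makeCurrent();  // EGL context is now current on this thread

        // Deliver onFrameAvailable on this same thread so the EGL context is current inside it
        mSurfaceTexture.setOnFrameAvailableListener(frameListener, glHandler);
    }
});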

answered Oct 02 '22 by Peter Tran