How to efficiently scale up video frames using the NDK

I am working on an Android project that processes video frames; I need to handle every frame before displaying it. The processing includes scaling frames up from 1920x1080 to 2560x1440, color space conversion, and some necessary RGB-based image processing, and all of this work must be finished within 33 ms to 40 ms.

I have optimized the YUV->RGB conversion and the other processing with ARM NEON, and they work well. But first I have to scale each frame up from 1080p to 2K, and that is now the performance bottleneck.

My question is how to efficiently scale an image up from 1080p to 2K within 20 ms. I don't have much experience with scaling algorithms, so any suggestions are helpful. Could I use ARM NEON to optimize an existing algorithm?
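
One possible direction is Google's libyuv library, whose scaling paths are NEON-optimized on ARM. A minimal sketch, assuming a libyuv build that provides NV12Scale (older builds would convert to I420 and call I420Scale instead):

    // Sketch: scale one NV12 frame 1920x1080 -> 2560x1440 with libyuv.
    #include <stdint.h>
    #include "libyuv/scale.h"

    int scale_nv12_1080p_to_2k(const uint8_t* src_y, const uint8_t* src_uv,
                               uint8_t* dst_y, uint8_t* dst_uv) {
        return NV12Scale(src_y,  1920,      // Y plane, stride
                         src_uv, 1920,      // interleaved UV plane, stride
                         1920, 1080,        // source size
                         dst_y,  2560,
                         dst_uv, 2560,
                         2560, 1440,        // destination size
                         kFilterBilinear);  // bilinear: speed/quality balance
    }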

The hardware environment:

  • CPU: Samsung Exynos 5420
  • Memory: 3GB
  • Display: 2560x1600 px

Update:

I will describe my decoding process. I use MediaCodec to decode normal (H.264) video to YUV (NV12); the default decoder is a hardware decoder and it's very fast. Then I use ARM NEON to convert NV12 to RGBW, and send the RGBW frame to SurfaceFlinger for display. I just use a normal SurfaceView rather than a GLSurfaceView.

The bottleneck is scaling the YUV frame up from 1080p to 2K fast enough.
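
For a sense of what the CPU has to do per frame, here is a scalar sketch of bilinear upscaling for a single 8-bit plane (16.16 fixed point; the names are illustrative). The inner loop is what a NEON implementation would vectorize:

    #include <stdint.h>

    // Scalar sketch: bilinear upscale of one 8-bit plane (e.g. the Y plane).
    // Coordinates are 16.16 fixed point; the same loop handles UV at half size.
    void bilinear_scale_plane(const uint8_t* src, int src_w, int src_h,
                              uint8_t* dst, int dst_w, int dst_h) {
        const uint32_t x_step = ((uint32_t)(src_w - 1) << 16) / (uint32_t)(dst_w - 1);
        const uint32_t y_step = ((uint32_t)(src_h - 1) << 16) / (uint32_t)(dst_h - 1);
        for (int dy = 0; dy < dst_h; dy++) {
            uint32_t sy = (uint32_t)dy * y_step;
            const uint8_t* row0 = src + (sy >> 16) * (uint32_t)src_w;
            const uint8_t* row1 = row0 + ((sy >> 16) + 1 < (uint32_t)src_h ? src_w : 0);
            uint32_t fy = sy & 0xFFFF;                  // vertical blend weight
            for (int dx = 0; dx < dst_w; dx++) {
                uint32_t sx = (uint32_t)dx * x_step;
                uint32_t x0 = sx >> 16;
                uint32_t x1 = x0 + 1 < (uint32_t)src_w ? x0 + 1 : x0;
                uint32_t fx = sx & 0xFFFF;              // horizontal blend weight
                // Blend the four neighboring samples.
                uint32_t top = row0[x0] * (0x10000 - fx) + row0[x1] * fx;
                uint32_t bot = row1[x0] * (0x10000 - fx) + row1[x1] * fx;
                dst[dy * dst_w + dx] =
                    (uint8_t)(((top >> 16) * (0x10000 - fy) + (bot >> 16) * fy) >> 16);
            }
        }
    }

Even vectorized, this touches every pixel of the 2K Y plane plus the half-resolution UV plane on every frame, which is a lot of memory traffic.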

asked Nov 10 '22 by NicotIne

1 Answer

I find that examples work well, so allow me to lead with this example program that uses OpenGL shaders to convert from YUV -> RGB: http://www.fourcc.org/source/YUV420P-OpenGL-GLSLang.c

What I envision for your program is:

  1. Hardware video decodes H.264 stream -> YUV array
  2. Upload that YUV array as a texture to OpenGL; actually, you will upload 3 different textures-- Y, U, and V
  3. Run a fragment shader that converts those Y, U, and V textures into an RGB(W) image; this will produce a new texture in video memory (a shader sketch follows this list)
  4. Run a new fragment shader against the texture generated in previous step in order to scale the image
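
For step 3, the conversion shader can be very small. Here is a sketch as a GLSL ES fragment shader embedded in a C string, using BT.601 full-range coefficients; the sampler names tex_y/tex_u/tex_v are illustrative:

    /* Sketch of the step-3 shader: sample Y, U, V textures, emit RGB. */
    static const char* kYuvToRgbFragShader =
        "precision mediump float;                            \n"
        "varying vec2 v_texcoord;                            \n"
        "uniform sampler2D tex_y;                            \n"
        "uniform sampler2D tex_u;                            \n"
        "uniform sampler2D tex_v;                            \n"
        "void main() {                                       \n"
        "    float y = texture2D(tex_y, v_texcoord).r;       \n"
        "    float u = texture2D(tex_u, v_texcoord).r - 0.5; \n"
        "    float v = texture2D(tex_v, v_texcoord).r - 0.5; \n"
        "    gl_FragColor = vec4(y + 1.402 * v,              \n" /* R */
        "                        y - 0.344 * u - 0.714 * v,  \n" /* G */
        "                        y + 1.772 * u,              \n" /* B */
        "                        1.0);                       \n"
        "}                                                   \n";

Since MediaCodec hands back NV12 (interleaved UV), an alternative is to upload the UV plane as one GL_LUMINANCE_ALPHA texture and read U and V from its .r and .a channels, so only two textures are needed.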

There might be a bit of a learning curve involved here, but I think it's workable, given your problem description. Take it one step at a time: get the OpenGL framework in place, try uploading just the Y texture and writing a naive fragment shader that emits a grayscale pixel based on the Y sample, then move on to correctly converting the image, then get a really naive upsampler working, then put a more sophisticated upsampler into service.
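
One note on step 4: the "really naive upsampler" is nearly free on a GPU, because bilinear filtering is built into texture sampling. Drawing the converted texture into a 2560x1440 viewport with GL_LINEAR filtering already performs a bilinear upscale; a more sophisticated upsampler (e.g. bicubic) would replace this with its own fragment shader. A sketch:

    #include <GLES2/gl2.h>

    /* Sketch: bilinear upscale by drawing the step-3 RGB texture into a
       2560x1440 viewport with GL_LINEAR filtering. Assumes a pass-through
       shader program and a full-screen quad are already set up. */
    void draw_upscaled(GLuint rgb_tex) {
        glBindTexture(GL_TEXTURE_2D, rgb_tex);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
        glViewport(0, 0, 2560, 1440);           /* output size = scaled size */
        glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);  /* full-screen quad */
    }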

answered Nov 15 '22 by Multimedia Mike