I am working on an Android project that processes video frames: I need to handle every frame before displaying it. The processing includes scaling frames up from 1920x1080 to 2560x1440, color space conversion, and some necessary RGB-based image processing, and all of this work has to finish within 33ms~40ms per frame.
I have optimized the YUV->RGB conversion and the other processing with ARM NEON, and they work well. But I first have to scale each frame up from 1080p to 2K, and that is now the performance bottleneck.
My question is: how can I efficiently scale an image up from 1080p to 2K within 20ms? I don't have much experience with scaling algorithms, so any suggestions are helpful. Could I use ARM NEON to optimize an existing algorithm?
The hardware environment:
Update:
I will describe my decoding process: I use MediaCodec to decode a normal H.264 video to YUV (NV12). The default decoder is a hardware decoder, and it is very fast. Then I use ARM NEON to convert NV12 to RGBW and send the RGBW frame to SurfaceFlinger for display. I just use a normal SurfaceView rather than a GLSurfaceView.
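Roughly, the decode side looks like the sketch below (simplified: the class name, the 10ms timeouts and the processFrame() hook are placeholders, and the decoder is assumed to really hand back NV12 byte buffers as described above):

```java
import android.media.MediaCodec;
import android.media.MediaExtractor;
import android.media.MediaFormat;
import java.nio.ByteBuffer;

public class Nv12DecodeSketch {

    // Decode H.264 to raw NV12 buffers with the hardware decoder, then hand each
    // frame to the existing NEON NV12->RGBW stage (processFrame() is a placeholder).
    public static void decode(String videoPath) throws Exception {
        MediaExtractor extractor = new MediaExtractor();
        extractor.setDataSource(videoPath);
        extractor.selectTrack(0);                       // assume track 0 is the video track
        MediaFormat format = extractor.getTrackFormat(0);

        MediaCodec decoder = MediaCodec.createDecoderByType("video/avc");
        decoder.configure(format, null /* no output Surface: we want the raw bytes */, null, 0);
        decoder.start();

        MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
        boolean inputDone = false, outputDone = false;
        while (!outputDone) {
            if (!inputDone) {
                int inIndex = decoder.dequeueInputBuffer(10_000);
                if (inIndex >= 0) {
                    ByteBuffer inBuf = decoder.getInputBuffer(inIndex);
                    int size = extractor.readSampleData(inBuf, 0);
                    if (size < 0) {
                        decoder.queueInputBuffer(inIndex, 0, 0, 0,
                                MediaCodec.BUFFER_FLAG_END_OF_STREAM);
                        inputDone = true;
                    } else {
                        decoder.queueInputBuffer(inIndex, 0, size, extractor.getSampleTime(), 0);
                        extractor.advance();
                    }
                }
            }

            int outIndex = decoder.dequeueOutputBuffer(info, 10_000);
            if (outIndex >= 0) {
                // 1920x1080 NV12: 1920*1080 Y bytes followed by interleaved U/V bytes.
                ByteBuffer nv12 = decoder.getOutputBuffer(outIndex);
                processFrame(nv12, info.size);           // NEON NV12->RGBW, scaling, ...
                decoder.releaseOutputBuffer(outIndex, false);
                if ((info.flags & MediaCodec.BUFFER_FLAG_END_OF_STREAM) != 0) {
                    outputDone = true;
                }
            }
        }
        decoder.stop();
        decoder.release();
        extractor.release();
    }

    private static void processFrame(ByteBuffer frame, int size) {
        // placeholder for the per-frame processing described above
    }
}
```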
The bottleneck is how to scale the YUV frame up from 1080p to 2K quickly.
I find that examples work well, so let me lead with this example program that uses OpenGL shaders to convert YUV -> RGB: http://www.fourcc.org/source/YUV420P-OpenGL-GLSLang.c
What I envision for your program is: decode each frame to NV12 with MediaCodec as you do now, upload the Y and interleaved UV planes as OpenGL textures every frame, render a full-screen quad into a 2560x1440 GLSurfaceView, and let a fragment shader do the YUV -> RGB conversion while the GPU's texture sampling performs the 1080p -> 2K upscale as part of the same draw.
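Concretely, the per-frame upload might look something like the sketch below (the class and method names are mine, and the GL_LUMINANCE / GL_LUMINANCE_ALPHA pairing for the Y and interleaved UV planes is one common way to feed NV12 to an ES 2.0 shader, not the only one):

```java
import android.opengl.GLES20;
import java.nio.ByteBuffer;

// Uploads one decoded NV12 frame as two GLES 2.0 textures: the Y plane as
// GL_LUMINANCE and the interleaved UV plane as GL_LUMINANCE_ALPHA at half
// resolution. GL_LINEAR filtering on both textures is what performs the
// bilinear 1920x1080 -> 2560x1440 upscale when the full-screen quad is drawn.
public final class Nv12TextureUploader {
    private final int[] tex = new int[2];

    public void init() {
        GLES20.glGenTextures(2, tex, 0);
        for (int t : tex) {
            GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, t);
            GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
            GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);
            GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_S, GLES20.GL_CLAMP_TO_EDGE);
            GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_T, GLES20.GL_CLAMP_TO_EDGE);
        }
    }

    // yPlane: width*height bytes; uvPlane: width*height/2 bytes of interleaved U,V.
    public void upload(ByteBuffer yPlane, ByteBuffer uvPlane, int width, int height) {
        GLES20.glPixelStorei(GLES20.GL_UNPACK_ALIGNMENT, 1);  // rows are tightly packed

        GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, tex[0]);
        GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_LUMINANCE,
                width, height, 0,
                GLES20.GL_LUMINANCE, GLES20.GL_UNSIGNED_BYTE, yPlane);

        GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, tex[1]);
        GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_LUMINANCE_ALPHA,
                width / 2, height / 2, 0,
                GLES20.GL_LUMINANCE_ALPHA, GLES20.GL_UNSIGNED_BYTE, uvPlane);
    }

    public int yTexture()  { return tex[0]; }
    public int uvTexture() { return tex[1]; }
}
```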
There might be a bit of a learning curve involved here, but I think it's workable given your problem description. Take it one step at a time: get the OpenGL framework in place, try uploading just the Y texture and writing a naive fragment shader that emits a grayscale pixel based on the Y sample, then move on to correctly converting the image, then get a really naive upsampler working, then put a more sophisticated upsampler into service.
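To make those first shader milestones concrete, here are two fragment-shader sketches you might start from (the varying/uniform names vTexCoord, uTexY and uTexUV are assumptions that must match your vertex shader and glGetUniformLocation() calls, and the BT.601 video-range coefficients are an assumption about your source):

```java
// Fragment shaders for the first milestones described above.
final class Shaders {

    // Milestone 1: just display the Y plane as grayscale to verify upload + draw.
    static final String GRAY_FRAGMENT =
            "precision mediump float;\n" +
            "varying vec2 vTexCoord;\n" +
            "uniform sampler2D uTexY;\n" +
            "void main() {\n" +
            "    float y = texture2D(uTexY, vTexCoord).r;\n" +
            "    gl_FragColor = vec4(y, y, y, 1.0);\n" +
            "}\n";

    // Milestone 2: full NV12 -> RGB conversion (BT.601 video-range coefficients
    // assumed; swap in BT.709 / full-range constants if that matches your source).
    // U comes out of the LUMINANCE channel (.r) and V out of the ALPHA channel (.a)
    // of the half-resolution UV texture.
    static final String NV12_FRAGMENT =
            "precision mediump float;\n" +
            "varying vec2 vTexCoord;\n" +
            "uniform sampler2D uTexY;\n" +
            "uniform sampler2D uTexUV;\n" +
            "void main() {\n" +
            "    float y = 1.164 * (texture2D(uTexY, vTexCoord).r - 0.0625);\n" +
            "    vec2 uv = texture2D(uTexUV, vTexCoord).ra - vec2(0.5, 0.5);\n" +
            "    float r = y + 1.596 * uv.y;\n" +
            "    float g = y - 0.391 * uv.x - 0.813 * uv.y;\n" +
            "    float b = y + 2.018 * uv.x;\n" +
            "    gl_FragColor = vec4(r, g, b, 1.0);\n" +
            "}\n";
}
```

Because the quad is rasterized at 2560x1440 while the textures stay at 1920x1080, the GL_LINEAR filtering on the samplers already acts as the "really naive upsampler"; a more sophisticated one (e.g. bicubic) can be layered on later by taking extra taps in this same fragment shader.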