How to handle memory limitation of GPU for high resolution image processing on GPU?

I am making a camera app that will provide some filters to users. Currently my code is in the NDK and it works OK, but I want to make it a bit faster. The GPU and OpenGL ES 2.0 seem to be the way to go. My only concern with the GPU is its memory limitation: modern cameras take 5-10 MP images, while the GPU memory limit is far less than that. I was wondering if there is a way to work around that limitation. The only logical option seems to be dividing the image into smaller parts, processing them on the GPU, and finally assembling them into the final image. My question is whether this approach would still perform well, and whether there is any other option for processing high resolution images on mobile GPUs.

Edit: I need to clarify that I want to use the GPU for image processing, so my goal is not to render results to the screen. I will render to another texture and save it to disk.

Asked Oct 02 '12 by dirhem

1 Answer

Your tiling idea has been used since the Nintendo Entertainment System (which ran on a Ricoh 2A03 at 1.79 MHz), so it's a good approach. Google uses tiling to manage map displays, and even games like Crysis tend to limit most of their textures to something like 1024x1024 (1 megapixel). And yep, at 3 bytes per pixel a 10-megapixel image requires about 30 MB of RAM, so some devices may have problems, especially since using both a source and a destination texture would mean about 60 MB of RAM.

Just keep in mind that texture sizes tend to be powers of 2 (2, 4, 8, 16, 32, 64, etc). You may also sometimes get better quality if you chop up the image and tile it.

Answered Sep 19 '22 by Joe Plante