From experiments and from reading other posts like this one, it seems that it's hard to process high-resolution images on Android because there is a limit on how much memory the VM will allow you to allocate.
Loading an 8MP camera picture takes around 20 MB of memory.
I understand that the easy solution is to downsample the image when loading it (BitmapFactory offers such an option), but I would still like to process the image at full resolution: the camera shoots 8MP, so why would I use only 4MP and reduce the quality?
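For reference, the downsampling I'm talking about looks roughly like this (the path is just a placeholder for the camera output):

```java
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;

public class Downsampled {
    // Decode the camera JPEG at half the width and height (a quarter of the pixels).
    static Bitmap loadDownsampled(String path) {
        BitmapFactory.Options opts = new BitmapFactory.Options();
        opts.inSampleSize = 2; // 1 = full size, 2 = half in each dimension, ...
        return BitmapFactory.decodeFile(path, opts);
    }
}
```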
Does anyone know good workarounds for that?
In a resource-constrained environment I think your only solution is to divide and conquer: caching and tiling, i.e. processing the image as tiles.
Instead of loading and processing the whole image at once, you load and save manageable chunks of it from a raw data file and do your processing on those. This is not trivial and can get really complex depending on the kind of processing you want to do, but it's the only way if you don't want to compromise on image quality.
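One way to keep only a single manageable chunk in memory at a time is BitmapRegionDecoder (available since API level 10). A rough sketch, assuming the image sits in a file and is processed in horizontal strips:

```java
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.graphics.BitmapRegionDecoder;
import android.graphics.Rect;
import java.io.IOException;

public class TileProcessor {
    // Process the image strip by strip so only one strip is decoded at a time.
    static void processInStrips(String path, int stripHeight) throws IOException {
        BitmapRegionDecoder decoder = BitmapRegionDecoder.newInstance(path, false);
        int width = decoder.getWidth();
        int height = decoder.getHeight();
        BitmapFactory.Options opts = new BitmapFactory.Options();
        opts.inPreferredConfig = Bitmap.Config.ARGB_8888;
        for (int top = 0; top < height; top += stripHeight) {
            int bottom = Math.min(top + stripHeight, height);
            Bitmap strip = decoder.decodeRegion(new Rect(0, top, width, bottom), opts);
            // ... run your per-strip processing here and write the result out ...
            strip.recycle(); // free the strip before decoding the next one
        }
        decoder.recycle();
    }
}
```

The strip height is the knob for the memory/speed trade-off, and any filter that needs neighbouring pixels would have to decode slightly overlapping strips.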
Indeed, this is hard. But if the image is in some contiguous raster format, you can mmap it (see java.nio.MappedByteBuffer, which FileChannel.map returns): this way you get a byte buffer without allocating it on the VM heap.
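A minimal sketch of that, assuming the raw raster has already been written to a flat file on disk:

```java
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

public class RawImageMap {
    // Map a raw (uncompressed) pixel file into memory instead of reading it into a heap array.
    static MappedByteBuffer mapRaw(String path) throws IOException {
        RandomAccessFile file = new RandomAccessFile(path, "r");
        FileChannel channel = file.getChannel();
        MappedByteBuffer pixels =
                channel.map(FileChannel.MapMode.READ_ONLY, 0, channel.size());
        channel.close(); // the mapping stays valid after the channel is closed
        file.close();
        return pixels;   // index into this buffer instead of holding a byte[] on the heap
    }
}
```

Pixel access then becomes offset arithmetic on the buffer, and the OS pages the data in and out as needed.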
2 things:
1. Check out the Gallery app in Honeycomb. It does this tile-based rendering: you can zoom in on an image and see that the part you're looking at is rendered at a higher resolution than the rest, and if you pan around you can watch the tiles being rendered.
2. When using native code (NDK) the VM heap limit does not apply. So you could try to load all the data natively and pull parts of it back through JNI, but I doubt it would work better than what the Honeycomb gallery does.