In some apps, it is important to handle large images quickly and without hitting OOM errors. For this, JNI (or RenderScript, which sadly lacks documentation) can be a good solution.
In the past, I've succeeded in using JNI to rotate huge bitmaps while avoiding OOM (links here, here and here). It was a nice (yet annoyingly hard) experience, but in the end it worked.
The Android framework has plenty of functions to handle bitmaps, but I have no idea what the situation is on the JNI side.
I already know how to pass a bitmap from Android's "Java world" to the "JNI world" and back. What I don't know is which functions I can use on the JNI side to help me with bitmaps.
I wish to be able to do all image operations (including decoding) in JNI, so that I won't need to worry about OOM when presented with large images; at the end of the process, I could convert the data to a Java bitmap (to show the user) and/or write it to a file. Again, I don't want to convert the data on the JNI side to a Java bitmap just to be able to run those operations.
As it turns out, there are libraries that offer many such functions (like JavaCV), but they are quite large, and I'm not sure about their features or whether they really do the decoding on the JNI side, so I would prefer to know what is possible via Android's built-in native functions instead.
Which functions are available for image manipulation on the JNI side on Android? For example, how could I run face detection on bitmaps, apply matrices, downsample bitmaps, scale bitmaps, and so on?
For some of the operations, I can already think of a way to implement them (scaling images is quite easy, and Wikipedia can help a lot), but some are very complex.
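To illustrate how far plain C gets you for the simple cases: a minimal nearest-neighbor downscale over a raw 32-bit pixel buffer (the function name and buffer layout here are my own, not an Android API) needs nothing beyond the standard library:

```c
#include <stdint.h>
#include <stdlib.h>

// Nearest-neighbor scale of a 32-bit (e.g. ARGB_8888) pixel buffer.
// src is src_w x src_h pixels; the result is dst_w x dst_h pixels,
// allocated with malloc (caller frees). Returns NULL on failure.
uint32_t *scale_nearest(const uint32_t *src, int src_w, int src_h,
                        int dst_w, int dst_h) {
    uint32_t *dst = malloc((size_t)dst_w * dst_h * sizeof(uint32_t));
    if (!dst) return NULL;
    for (int y = 0; y < dst_h; y++) {
        int sy = y * src_h / dst_h;          // nearest source row
        for (int x = 0; x < dst_w; x++) {
            int sx = x * src_w / dst_w;      // nearest source column
            dst[y * dst_w + x] = src[sy * src_w + sx];
        }
    }
    return dst;
}
```

Because this works on a raw buffer, the same code runs unchanged on the pixels obtained from a locked Java bitmap or on pixels you decoded natively.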
Even if I do implement the operations myself, maybe others have done it much more efficiently, given the many optimizations C/C++ allows. Am I really on my own when going to the JNI side of Android, where I need to implement everything from scratch?
Just to make it clear, what I'm interested in is: input bitmap in Java -> image manipulation purely in JNI and C/C++ (no conversion to Java objects whatsoever) -> output bitmap in Java.
"Built-in JNI functions of Android" is something of an oxymoron. It is technically correct that many Android framework Java classes use JNI somewhere down the chain to invoke native libraries. But there are three caveats to this statement:

1. These are implementation details, and are subject to change without notice in any future release of Android, in any fork (e.g. Kindle), or even in an OEM version that is not regarded as a fork (e.g. built by Samsung, or for a Qualcomm SoC).
2. The way native methods are implemented in the core Java classes is different from "classical" JNI. These methods are preloaded and cached by the VM, and therefore do not suffer from most of the overhead typical of JNI calls.
3. There is nothing your Java or native code can do to interact directly with the JNI methods of other classes, especially classes that constitute the system framework.
All this said, you are free to study the source code of Android, to find the native libraries that back specific classes and methods (e.g. face detection), and use these libraries in your native code, or build a JNI layer of your own to use these libraries from your Java code.
To give a specific example, face detection in Android is implemented through the android.media.FaceDetector class, which loads libFFTEm.so. You can look at the native code and use it as you wish. You should not assume that libFFTEm.so will be present on the device, or that the library on the device will have the same API. But in this specific case it's not a problem, because all the work of neven is entirely software-based. Therefore you can copy this code in its entirety, or only the relevant parts, and make it part of your native library. Note that on many devices you can simply load and use /system/lib/libFFTEm.so and never feel discomfort, until you encounter a system that misbehaves.
One noteworthy conclusion you can draw from reading the native code is that the underlying algorithm ignores color information. Therefore, if the image for which you want to find face coordinates comes from a YUV source, you can avoid a lot of overhead by calling
// run detection
btk_DCR_assignGrayByteImage(hdcr, bwbuffer, width, height);

int numberOfFaces = 0;
if (btk_FaceFinder_putDCR(hfd, hdcr) == btk_STATUS_OK) {
    numberOfFaces = btk_FaceFinder_faces(hfd);
} else {
    ALOGE("ERROR: Return 0 faces because error exists in btk_FaceFinder_putDCR.\n");
}
directly with your YUV (or Y) byte array, instead of converting it to RGB and back to YUV in android.media.FaceDetector.findFaces(). If your YUV buffer comes from Java, you can build your own class, YuvFaceDetector, as a copy of android.media.FaceDetector, with the only difference being that YuvFaceDetector.findFaces() takes Y (luminance) values only instead of a Bitmap, avoiding the RGB-to-Y conversion.
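For reference, the RGB-to-Y step that findFaces() performs internally (and that a YUV source lets you skip) is just a standard luma conversion. A minimal fixed-point sketch, using the common BT.601 weights (the exact coefficients inside the framework's implementation may differ):

```c
#include <stdint.h>
#include <stddef.h>

// Convert 32-bit ARGB pixels to an 8-bit luminance (Y) plane using
// integer BT.601 weights. 77 + 150 + 29 = 256, so the >> 8 keeps the
// result in the 0..255 range.
void argb_to_luma(const uint32_t *argb, uint8_t *y, size_t count) {
    for (size_t i = 0; i < count; i++) {
        uint32_t p = argb[i];
        uint32_t r = (p >> 16) & 0xFF;
        uint32_t g = (p >> 8) & 0xFF;
        uint32_t b = p & 0xFF;
        // Y ~= 0.299 R + 0.587 G + 0.114 B, in 8-bit fixed point
        y[i] = (uint8_t)((r * 77 + g * 150 + b * 29) >> 8);
    }
}
```

If your frames arrive as NV21/YV12 from the camera, the Y plane is already the first width*height bytes of the buffer, so even this step is unnecessary.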
Some other situations are not as easy as this one. For example, video codecs are tightly coupled to the hardware platform, so you cannot simply copy the code from libstagefright.so into your project. The JPEG codec is a special beast. In modern systems (IIRC, since 2.2), you can expect /system/lib/libjpeg.so to be present. But many platforms also have much more efficient hardware implementations of JPEG codecs through libstagefright.so or OpenMAX, and these are often what android.graphics.Bitmap.compress() and the android.graphics.BitmapFactory.decode***() methods use. There is also the optimized libjpeg-turbo, which has its own advantages over /system/lib/libjpeg.so.
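If you do link against a JPEG library (the system libjpeg.so, or your own copy of libjpeg-turbo), note that the decoder itself can downsample while decoding, which is exactly what you need to avoid OOM on large images. A hedged sketch against the standard libjpeg API (the fixed 1/8 factor, function name, and minimal error handling are my own simplifications):

```c
#include <stdio.h>
#include <stdlib.h>
#include <jpeglib.h>   /* libjpeg or libjpeg-turbo */

// Decode a JPEG file at 1/8 of its original size. libjpeg performs the
// downscaling during DCT decoding, so the full-size image is never held
// in memory. Returns a malloc'ed pixel buffer (caller frees), or NULL.
unsigned char *decode_jpeg_eighth(const char *path,
                                  int *out_w, int *out_h, int *out_ch) {
    FILE *f = fopen(path, "rb");
    if (!f) return NULL;

    struct jpeg_decompress_struct cinfo;
    struct jpeg_error_mgr jerr;
    cinfo.err = jpeg_std_error(&jerr);
    jpeg_create_decompress(&cinfo);
    jpeg_stdio_src(&cinfo, f);
    jpeg_read_header(&cinfo, TRUE);

    cinfo.scale_num = 1;     /* request 1/8 of the original size */
    cinfo.scale_denom = 8;
    jpeg_start_decompress(&cinfo);

    *out_w = cinfo.output_width;
    *out_h = cinfo.output_height;
    *out_ch = cinfo.output_components;

    size_t stride = (size_t)cinfo.output_width * cinfo.output_components;
    unsigned char *pixels = malloc(stride * cinfo.output_height);
    while (pixels && cinfo.output_scanline < cinfo.output_height) {
        unsigned char *row = pixels + cinfo.output_scanline * stride;
        jpeg_read_scanlines(&cinfo, &row, 1);
    }

    jpeg_finish_decompress(&cinfo);
    jpeg_destroy_decompress(&cinfo);
    fclose(f);
    return pixels;
}
```

This is the native-side equivalent of BitmapFactory.Options.inSampleSize, except the downsampled pixels never have to pass through a Java Bitmap until you choose to create one.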
It seems that your question is more about C/C++ image processing libraries than it is about Android per se. To that end, here are some other StackOverflow questions that might have information you'd find useful:
Fast Cross-Platform C/C++ Image Processing Libraries
C++ Image Processing Libraries