I have implemented an OpenCV application where I use the SURF descriptor. It is working; the code looks like this (I reduce the input video stream size to speed it up):
capture.set(Highgui.CV_CAP_PROP_FRAME_WIDTH, display.getWidth());
capture.set(Highgui.CV_CAP_PROP_FRAME_HEIGHT, display.getHeight());
capture.retrieve(mRgba, Highgui.CV_CAP_ANDROID_COLOR_FRAME_RGBA);

try {
    //-- Step 1: Detect the keypoints using the SURF detector
    surfDetector.detect(mRgba, vector1);
    for (KeyPoint t : vector1)
        Core.circle(mRgba, t.pt, 10, new Scalar(100, 100, 100));

    //-- Step 2: Calculate descriptors (feature vectors)
    //extractor.compute(mRgba, vector1, descriptor1);

    //-- Draw matches
    //Mat img_matches;
    //drawMatches( mRgba, vector1, mRgba, vector1, matches, img_matches );
} catch (Exception e) {
    Log.e("ERROR", e.toString());
}
But the computation is still far too slow, so I need to find another way to reduce the input video stream quality. If you know another method to speed it up, feel free to share it with me ;)
Thanks for your time & answers
The real answer to this question is much closer to "there isn't much you can do!" than to anything else. We have to acknowledge that mobile phones do not yet have processing power comparable to a desktop. The majority of Android phones in the world still run older versions of the system and, most important of all, they are single-core devices clocked below 1 GHz, with limited memory, and so on.
Nevertheless, there is always something you can do to improve speed with relatively small changes.
Now, I am also computing OpenCV SURF on the Galaxy S, and I get a frame rate of 1.5 fps for 200 features with the Hessian threshold at 1500 in a 320x240 image. I admit that is poor performance, but in my case I only have to compute features every once in a while, since I am measuring optical flow for tracking purposes. Still, it is very strange that you get only 1 frame every 4-5 seconds.
First, it seems to me that you are using VideoCapture to obtain the camera frames. Well, I am not: I am using the Android camera implementation directly. I did not check how VideoCapture is implemented in the Java port of OpenCV, but it appears to be slower than the implementation used in some of the tutorials. However, I can't be 100% sure about this, since I haven't tested it. Have you?
Reduce native calls to the minimum possible. Java OpenCV native calls are time-expensive. Also, follow all the guidelines specified in the Android-OpenCV best practices page. If you have multiple native calls, join them all in a single JNI call.
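As a concrete example of batching native calls: the question's loop draws each keypoint with a separate Core.circle call, which is one JNI crossing per point. The OpenCV 2.x Java port has Features2d.drawKeypoints, which draws the whole set in a single native call. A sketch (assuming the OpenCV Java bindings and native library are loaded; class and method names are from the 2.x Java port, so check them against your version):

```java
import org.opencv.core.Mat;
import org.opencv.core.MatOfKeyPoint;
import org.opencv.core.Scalar;
import org.opencv.features2d.FeatureDetector;
import org.opencv.features2d.Features2d;

public class KeypointOverlay {
    // Instead of one Core.circle call (one JNI crossing) per keypoint,
    // detect once and draw all keypoints in a single native call.
    static Mat detectAndDraw(FeatureDetector detector, Mat image) {
        MatOfKeyPoint keypoints = new MatOfKeyPoint();
        detector.detect(image, keypoints);          // 1 native call
        Mat out = new Mat();
        Features2d.drawKeypoints(image, keypoints,  // 1 native call
                out, new Scalar(100, 100, 100), 0 /* default flags */);
        return out;
    }
}
```

Two native calls per frame instead of two plus one per keypoint; with 200 features that removes ~200 JNI crossings per frame.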
You should also reduce the image size and increase the SURF Hessian threshold. This will reduce the number of detected features, but the ones that remain will be stronger and more robust for recognition and matching. You are right when you say that SURF is the most robust detector (it is also the slowest, and it is patented). But if this is not a deal-breaker for you, I would recommend trying the new ORB detector, a variant of BRIEF that performs better under rotation. ORB has disadvantages, though, such as a limited number of detected keypoints and poor scale invariance. This is a very interesting feature detector algorithms comparison report. It also suggests the SURF detector is slower in the new OpenCV 2.3.1 release, probably due to changes made to the algorithm for increased robustness.
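Swapping SURF for ORB is a small change in the 2.x Java API; here is a hedged sketch (the factory constants and classes are from the OpenCV 2.3/2.4 Java port, and the native library must be loaded; verify against your version):

```java
import org.opencv.core.Mat;
import org.opencv.core.MatOfKeyPoint;
import org.opencv.features2d.DescriptorExtractor;
import org.opencv.features2d.FeatureDetector;

public class OrbPipeline {
    // ORB detector + descriptor: much faster than SURF on ARM and
    // patent-free, at the cost of weaker scale invariance.
    private final FeatureDetector detector =
            FeatureDetector.create(FeatureDetector.ORB);
    private final DescriptorExtractor extractor =
            DescriptorExtractor.create(DescriptorExtractor.ORB);

    // Detection is typically run on a grayscale Mat.
    void process(Mat gray, MatOfKeyPoint keypoints, Mat descriptors) {
        detector.detect(gray, keypoints);
        extractor.compute(gray, keypoints, descriptors);
        // ORB descriptors are binary: match them with a Hamming-distance
        // matcher (DescriptorMatcher.BRUTEFORCE_HAMMING), not L2.
    }
}
```

Note the matcher change: reusing an L2/FLANN matcher configured for SURF's float descriptors will give meaningless distances on ORB's binary descriptors.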
Now the fun bits. The ARM processor architecture (on which most Android phones are based) is widely reported to be slow at floating-point calculations, on which feature detector algorithms rely heavily. There have been very interesting discussions about this issue, and many say you should use fixed-point calculations whenever possible. The newer ARMv7 NEON extension provides faster floating-point calculations, but not all devices support it. To check whether your device does, run adb shell cat /proc/cpuinfo. You can also compile your native code with NEON directives (LOCAL_ARM_NEON := true), but I doubt this will do much good, since apparently few OpenCV routines are NEON-optimized. So the only way to gain speed here is to rewrite the code with NEON intrinsics (this is completely unexplored ground for me, but you might find it worth looking into). In the android-opencv group it was suggested that future OpenCV releases will ship more NEON-optimized libraries. This could be interesting, but I am not sure whether it is worth working on now or better to wait for faster CPUs and systems optimized for GPU computing. Note that Android versions < 3.0 do not use built-in hardware acceleration.
If you are doing this for academic purposes, convince your university to buy you a better device ^^. That might ultimately be the best route to faster SURF feature detection. Another option is to rewrite the algorithms. I am aware some people at the Intel labs did this with some success, but obviously they won't share it. Honestly, after investigating this issue for a few weeks, I realised that for my specific needs (and since I am neither a computer science engineer nor an algorithms expert) there is more value in waiting a few months for better devices than in banging my head against the wall dissecting the algorithms and writing near-assembly code.
Do you need to use the SURF feature/descriptor for your application? SURF is attractive because it matches very nicely, but as you've found out, it is somewhat slow. If you're just tracking points through a video, you could assume that points will not move much from frame to frame, detect and match Harris/FAST corners instead, and then keep a match only if it falls within an x-pixel radius of the original point.
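The radius gating itself needs no OpenCV at all; a minimal pure-Java sketch (class and method names are mine, for illustration) that keeps a match only when the new point lies within a given radius of the previous one:

```java
import java.util.ArrayList;
import java.util.List;

public class RadiusGate {
    // True if the tracked point moved at most maxRadius pixels
    // between consecutive frames (compares squared distances,
    // so no sqrt is needed per match).
    static boolean withinRadius(double prevX, double prevY,
                                double currX, double currY,
                                double maxRadius) {
        double dx = currX - prevX;
        double dy = currY - prevY;
        return dx * dx + dy * dy <= maxRadius * maxRadius;
    }

    // Filter parallel arrays of previous/current {x, y} coordinates;
    // returns the indices of the matches that survive the gate.
    static List<Integer> filterMatches(double[][] prev, double[][] curr,
                                       double maxRadius) {
        List<Integer> kept = new ArrayList<>();
        for (int i = 0; i < prev.length; i++) {
            if (withinRadius(prev[i][0], prev[i][1],
                             curr[i][0], curr[i][1], maxRadius)) {
                kept.add(i);
            }
        }
        return kept;
    }
}
```

In an OpenCV pipeline you would feed this the point coordinates from the previous and current frames' matched keypoints and drop any match the gate rejects.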
OpenCV has an (albeit somewhat limited) selection of feature detectors, descriptor extractors, and descriptor matchers; it would be worth investigating the options if you haven't already.