After discovering the power of OpenCV, I decided to use that library to develop the natural-marker tracking engine I am working on now. My problem is that I have no idea of a proper approach to implementing such a tracker.
I have devised the following plan:
I tried the SIFT and SURF algorithms for detecting and describing keypoints, and the end result was a very low frame rate for both (nowhere near real time). I notice that SIFT and SURF are quite computationally expensive; will they be suitable for tracking on a live camera feed?
Thanks.
Augmented Reality (AR) is the superimposing of digitally generated images onto a viewer's real-world surroundings. Unlike Virtual Reality, which creates a completely artificial environment, AR uses the existing environment and overlays it with new information.
Markerless AR merges digital data with input from real-time, real-world inputs registered to a physical space. The technology combines software, audio, and video graphics with a smartphone's or headset's cameras, gyroscope, accelerometer, haptic sensors, and location services to register 3D graphics in the real world.
AR incorporates three features: a combination of digital and physical worlds, interactions made in real time, and accurate 3D identification of virtual and real objects.
Developing such a tracker requires you to have deep knowledge of image processing, 3D imaging, tracking, etc. It is not like developing a simple application.
It is better to use an existing one ;)
FERNS is much more efficient and simpler than SIFT. You can use it. It was developed by researchers at EPFL; if you read AR/tracking papers you will see these guys are the leaders of the field. It is also implemented in later versions of OpenCV (I think in 2.1 or 2.2?).
Otherwise you can always get the source code for that algorithm from here: Ferns: Planar Object Detection
EDIT:
Basically, algorithms like FERNS will tell you the position, rotation, etc. that a certain surface takes with reference to another frame (these changes are represented by a matrix called a homography). This homography is everything you need for 3D rendering ;)
Using OpenGL or a similar 3D library, you draw the object using the calculated homography. If you repeat this process for each frame, you will have a simple AR application.
Theory Books on: Image Processing and 3D Imaging
For understanding AR read: ARToolKit paper
More on FERNS: oezuysal's site