Problem situation: we want to create AR visualizations that always appear in the same place (on a table) in a comfortable way. We don't want the customer to place the objects themselves, as in countless ARCore/ARKit examples.
I'm wondering if there is a way to implement these steps:
I know there is something like a Marker Detection API included in the latest build of the Tango SDK, but that technology is limited to a small number of devices (two, to be exact...).
Best regards, and thanks in advance for any ideas.
Both kits have their strengths. For example, ARKit is better for image recognition and specific iOS tasks, while ARCore is better for general graphics manipulation and gaming. However, both are widely used and will continue to develop alongside the hardware they serve.
ARCore can respond to and track two kinds of images: fixed images, such as a print hanging on a wall or a magazine on a table, and moving images, such as an advertisement on a passing bus or an image on a flat object held by the user as they move their hands around.
ARCore uses an algorithm called Concurrent Odometry and Mapping (COM) to understand where the phone is relative to the world around it.
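To give a feel for what odometry-style tracking does, here is a toy sketch (not the real COM algorithm, which fuses camera features with IMU data in 3D): the device pose is maintained by composing successive relative motion estimates into a world-frame position. All names and numbers are illustrative.

```python
import math

def compose(pose, delta):
    """Apply a relative motion (dx, dy, dtheta), measured in the device
    frame, to a world pose (x, y, theta)."""
    x, y, th = pose
    dx, dy, dth = delta
    # Rotate the device-frame translation into the world frame.
    wx = x + dx * math.cos(th) - dy * math.sin(th)
    wy = y + dx * math.sin(th) + dy * math.cos(th)
    return (wx, wy, th + dth)

pose = (0.0, 0.0, 0.0)            # start at the world origin
motions = [(1.0, 0.0, 0.0),       # move 1 m forward
           (0.0, 0.0, math.pi/2), # turn 90 degrees left
           (1.0, 0.0, 0.0)]       # move 1 m forward again
for m in motions:
    pose = compose(pose, m)
print(pose)  # roughly (1.0, 1.0, pi/2)
```

The point of the sketch is that tracking like this is purely relative to where the session started, which is exactly why it cannot, by itself, put content back "on the same table" in a later session.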
Marker based AR, also known as image recognition AR, requires a trigger photo or QR code to activate the AR experience. The user can scan the marker using their phone camera and the digital experience will appear on top of the marker. This allows the user to move around the marker and see the digital experience in 3D.
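The detection step can be sketched in miniature: find a known binary pattern in a binarised camera frame and use its position as the anchor for content. Real systems (ArUco, ARKit/ARCore image tracking) also recover the marker's 3D pose and are robust to perspective and lighting; this toy only finds a 2D location, and the frame and pattern are made up.

```python
MARKER = [[1, 0],
          [0, 1]]  # the known trigger pattern

def find_marker(frame, marker):
    """Return the (row, col) of the marker in the frame, or None."""
    mh, mw = len(marker), len(marker[0])
    for r in range(len(frame) - mh + 1):
        for c in range(len(frame[0]) - mw + 1):
            if all(frame[r + i][c + j] == marker[i][j]
                   for i in range(mh) for j in range(mw)):
                return (r, c)  # anchor the AR content here
    return None

frame = [[0, 0, 0, 0],
         [0, 1, 0, 0],
         [0, 0, 1, 0],
         [0, 0, 0, 0]]
print(find_marker(frame, MARKER))  # -> (1, 1)
```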
I am also interested in that topic. I think the true power of AR can only be unleashed when paired with environment understanding.
I think you have two options:
However, Apple seem to have it sorted: https://youtu.be/E2fd8igVQcU?t=2m58s
If using Google Tango, you can implement this with the built-in Area Description File (ADF) system. The system shows a holding screen and tells you to "walk around"; within a few seconds it can relocalise to an area the device has previously seen (or pull the information from a server, etc.).
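Conceptually, what an ADF-style relocalisation does is compare the features the camera currently sees against features saved for a known area, and declare a match when enough of them agree. Real systems match high-dimensional visual descriptors and then solve for the camera pose; in the sketch below the "descriptors" are just integer IDs and the threshold is invented.

```python
SAVED_AREA = {101, 205, 337, 480, 512}  # feature IDs stored in the "ADF"

def relocalise(observed, saved=SAVED_AREA, min_matches=3):
    """Return True when enough observed features match the saved area."""
    return len(set(observed) & saved) >= min_matches

print(relocalise([101, 205, 337, 999]))  # -> True  (3 matches)
print(relocalise([7, 8, 9]))             # -> False (no matches)
```

Once relocalised, anchors saved relative to the area description reappear in the same physical spot, which is what lets content stay "on the table" across sessions.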
Google's VPS (Visual Positioning Service) is a similar idea (still in closed beta) which should come to ARCore. As far as I understand, it will let you localise against a specific location using the camera feed, matched against a global shared map of all scanned locations. When released, I think it will fill the gap of an "AR Cloud"-type system and solve these problems for regular developers.
See https://developers.google.com/tango/overview/concepts#visual_positioning_service_overview
The general problem of relocalising to a space using only prior knowledge of the space and the camera feed is solved in academia and in other AR offerings (HoloLens, etc.); markers/tags aren't required. I'm unsure, however, which other commercial systems provide this feature.