Marker based initial positioning with ARCore/ARKit?

Problem situation: we want to create AR visualizations that always appear at the same place (e.g. on a table), in a convenient way. We don't want the customer to place the objects themselves, as in countless ARCore/ARKit examples.

I'm wondering if there is a way to implement these steps:

  1. Detect a marker on the table.
  2. Use the marker's pose as the initial position of the AR visualization and continue with SLAM tracking (see the sketch after this list).
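
For reference, a minimal sketch of how those two steps could look with ARKit's image-detection API (which shipped in ARKit 1.5, after this question was asked). The asset-catalog group name "Markers" and the box geometry are placeholders, not anything from the question:

```swift
import ARKit
import SceneKit
import UIKit

// Sketch: detect a known marker image, use its pose as the initial
// position of the visualization, then let world tracking (SLAM) take over.
class MarkerViewController: UIViewController, ARSCNViewDelegate {

    @IBOutlet var sceneView: ARSCNView!

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)

        let configuration = ARWorldTrackingConfiguration()
        // "Markers" is a hypothetical asset-catalog group; each reference
        // image needs its physical size set so ARKit can infer scale.
        if let markers = ARReferenceImage.referenceImages(inGroupNamed: "Markers",
                                                          bundle: nil) {
            configuration.detectionImages = markers
        }
        sceneView.delegate = self
        sceneView.session.run(configuration)
    }

    // Fires once ARKit finds the marker: the anchor's node already sits at
    // the marker's world-space pose, so content parented here is placed on
    // the table and tracked by SLAM from then on.
    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode,
                  for anchor: ARAnchor) {
        guard anchor is ARImageAnchor else { return }
        let visualization = SCNNode(geometry: SCNBox(width: 0.1, height: 0.1,
                                                     length: 0.1,
                                                     chamferRadius: 0))
        node.addChildNode(visualization)
    }
}
```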

I know there is something like a marker-detection API included in the latest build of the Tango SDK, but that technology is limited to a small number of devices (two, to be exact).

Best regards, and thanks in advance for any ideas.

asked Dec 14 '17 by user2463728

People also ask

Which is better ARCore or ARKit?

Both kits have their strengths. For example, ARKit is better for image recognition and specific iOS tasks, while ARCore is better for general graphics manipulation and gaming. However, both are widely used and will continue to develop alongside the hardware they serve.

Does ARCore support image tracking?

ARCore can respond to and track two kinds of images: images that are fixed in place, such as a print hanging on a wall or a magazine on a table, and moving images, such as an advertisement on a passing bus or an image on a flat object held by the user as they move their hands around.
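
For comparison, since the code samples on this page are in Swift, the ARKit-side analogue of tracking a moving image is ARImageTrackingConfiguration (ARKit 2+); the "Markers" asset-catalog group here is again a placeholder:

```swift
import ARKit

// Sketch of continuous image *tracking* (as opposed to one-shot detection):
// ARKit updates the ARImageAnchor's transform every frame while the
// marker stays in view, so it can follow a moving image.
func makeImageTrackingConfiguration() -> ARImageTrackingConfiguration? {
    guard let markers = ARReferenceImage.referenceImages(inGroupNamed: "Markers",
                                                         bundle: nil) else {
        return nil
    }
    let configuration = ARImageTrackingConfiguration()
    configuration.trackingImages = markers
    configuration.maximumNumberOfTrackedImages = 1  // track a single marker
    return configuration
}
```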

What algorithm is used by ARCore?

ARCore uses an algorithm called Concurrent Odometry and Mapping (COM) to understand where the phone is relative to the world around it.

What is Marker AR?

Marker-based AR, also known as image-recognition AR, requires a trigger photo or QR code to activate the AR experience. The user scans the marker with their phone camera, and the digital experience appears on top of the marker. This allows the user to move around the marker and see the digital experience in 3D.


2 Answers

I am also interested in that topic. I think the true power of AR can only be unleashed when paired with environment understanding.

I think you have two options:

  1. Wait for the new Vuforia 7 to be released; supposedly it is going to support visual markers on top of ARCore and ARKit.
  2. Use CoreML / computer vision. In theory it is possible, but I haven't seen many examples, and it might be a bit difficult to start with (e.g. building and calibrating a model). A rough sketch of this approach follows the list.
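
To illustrate option 2, here is one possible sketch of the vision route: detect a rectangular marker in the camera image with Apple's Vision framework, then hit-test its centre against ARKit's plane estimates to get a world-space anchor. The coordinate conversion is simplified (real code must handle the camera image's orientation), and `detectMarker` is a hypothetical helper, not an API:

```swift
import ARKit
import Vision

// Sketch: find a rectangular marker with Vision, then anchor content
// at the corresponding point on ARKit's estimated table surface.
func detectMarker(in frame: ARFrame, session: ARSession) {
    let request = VNDetectRectanglesRequest { request, _ in
        guard let rect = request.results?.first as? VNRectangleObservation else {
            return
        }
        // Vision uses normalized coordinates with a bottom-left origin;
        // ARFrame.hitTest expects a top-left origin, hence the y flip.
        // (Simplified: ignores the image's rotation relative to the UI.)
        let center = CGPoint(x: rect.boundingBox.midX,
                             y: 1 - rect.boundingBox.midY)

        if let result = frame.hitTest(center, types: [.existingPlaneUsingExtent,
                                                      .estimatedHorizontalPlane]).first {
            // Place an anchor at the marker's position; SLAM keeps it stable.
            session.add(anchor: ARAnchor(transform: result.worldTransform))
        }
    }
    let handler = VNImageRequestHandler(cvPixelBuffer: frame.capturedImage)
    try? handler.perform([request])
}
```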

However, Apple have got it sorted: https://youtu.be/E2fd8igVQcU?t=2m58s

answered Sep 21 '22 by Saico


If you are using Google Tango, you can implement this with the built-in Area Description File (ADF) system. The system shows a holding screen and tells you to "walk around"; within a few seconds, it can relocalise to an area the device has previously seen (or pull that information from a server, etc.).

Google's VPS (Visual Positioning Service) is a similar idea (still in closed beta), which should come to ARCore. As far as I understand, it will let you localise to a specific location using the camera feed matched against a global shared map of all scanned locations. When released, I think it will try to fill the gap of an AR Cloud type system, which would solve these problems for regular developers.

See https://developers.google.com/tango/overview/concepts#visual_positioning_service_overview

The general problem of relocalising to a space using only pre-knowledge of the space and the camera feed is solved in academia and in other AR offerings such as HoloLens; markers/tags aren't required. I'm unsure, however, which other commercial systems provide this feature.

answered Sep 20 '22 by Jethro