
Area learning after Google Tango

Area learning was a key feature of Google Tango which allowed a Tango device to locate itself in a known environment and save/load a map file (ADF).

Since then, Google has announced that it is shutting down Tango and putting its effort into ARCore, but I don't see anything related to area learning in the ARCore documentation.

What is the future of area learning on Android? Is it possible to achieve it on a non-Tango, ARCore-enabled device?

asked Jan 30 '18 by sdabet

People also ask

Why did Google Tango fail?

Google made it official when it pulled down the Project Tango website and wrote on Tango's Twitter account that Tango "will be deprecated" and "will not be supported by Google" after March 1st, 2018. It specifically called out ARCore as the reason for Tango's demise.

What are the core technologies of Project Tango?

Project Tango's goal is to give mobile devices a human-like understanding of space and motion by using three core technologies: motion tracking, area learning, and depth perception. This means that a Tango device can track its own movement and orientation through 3D space.

What is Tango platform?

Tango is an augmented reality computing platform developed by Google. It uses computer vision to enable mobile devices, such as smartphones and tablets, to detect their position relative to the world around them without using GPS or other external signals.


1 Answer

Currently, ARCore does not support Tango's area learning, and its offerings are not nearly as functional. First, Tango could take precise measurements of its surroundings, whereas ARCore uses mathematical models to make approximations. At the moment, ARCore's modeling is nowhere near competitive with Tango's measurement capabilities; it appears to model only certain flat surfaces. [1]
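To make that concrete, here is a minimal sketch of what ARCore's plane-based environmental understanding looks like through its Java API. It assumes a configured `Session` whose lifecycle (camera permission, resume/pause, GL surface) is handled elsewhere, as in the standard ARCore samples:

```java
import com.google.ar.core.Frame;
import com.google.ar.core.Plane;
import com.google.ar.core.Session;
import com.google.ar.core.TrackingState;
import com.google.ar.core.exceptions.CameraNotAvailableException;

// Logs the flat surfaces ARCore has modeled so far. Note that the
// "environmental understanding" is just a set of tracked Plane objects,
// each with an estimated pose and extent -- no dense measured geometry.
void logDetectedPlanes(Session session) throws CameraNotAvailableException {
    Frame frame = session.update();  // advance one camera frame
    for (Plane plane : frame.getUpdatedTrackables(Plane.class)) {
        if (plane.getTrackingState() == TrackingState.TRACKING) {
            android.util.Log.d("ARCore", "Plane " + plane.getType()
                    + " extent: " + plane.getExtentX() + " x " + plane.getExtentZ());
        }
    }
}
```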

Second, Tango's area learning let an app load previously captured ADF files, but ARCore does not currently support anything equivalent, meaning the developer has to hardcode the initial starting position. [2]
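As an illustration of that workaround (a hypothetical sketch: the coordinates and helper name are invented; only `Session`, `Pose`, and `Anchor` are real ARCore classes):

```java
import com.google.ar.core.Anchor;
import com.google.ar.core.Pose;
import com.google.ar.core.Session;

// Without Tango-style ADF relocalization, the world origin is simply
// wherever tracking started, so a "known" position can only be hardcoded
// relative to that arbitrary origin (coordinates here are illustrative).
Anchor placeContentAtAssumedPosition(Session session) {
    // 2 m in front of the starting pose, 0.5 m to the right (-Z is forward)
    Pose assumedPose = Pose.makeTranslation(0.5f, 0.0f, -2.0f);
    return session.createAnchor(assumedPose);
}
```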

Google is working on a Visual Positioning Service that would live in the cloud and allow a client to compare local point maps with ground-truth point maps to determine indoor position [3]. I suspect that this functionality will only work reliably if the original point map is generated with a rig that has a depth sensor (i.e., not in your own house with your smartphone), although mobile visual SLAM has had some success. This also seems like a perfect task for deep learning, so there may be robust solutions on the horizon. [4]
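Nothing about VPS internals is public, but the underlying idea of scoring a candidate device position against a stored ground-truth point map can be sketched in plain Java. This is a hypothetical brute-force illustration, not any Google API; real systems match visual feature descriptors and use robust estimators rather than raw nearest-neighbor residuals:

```java
// Score a candidate position offset (dx, dy, dz) by the average
// nearest-neighbor distance between the locally captured points,
// shifted by that offset, and the stored ground-truth map.
// A lower score means the candidate position fits the map better.
static double matchScore(float[][] localPoints, float[][] mapPoints,
                         float dx, float dy, float dz) {
    double total = 0;
    for (float[] p : localPoints) {
        double best = Double.MAX_VALUE;
        for (float[] q : mapPoints) {
            double ex = p[0] + dx - q[0];
            double ey = p[1] + dy - q[1];
            double ez = p[2] + dz - q[2];
            best = Math.min(best, ex * ex + ey * ey + ez * ez);
        }
        total += Math.sqrt(best);
    }
    return total / localPoints.length;
}
```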

[1] ARCore official docs https://developers.google.com/ar/discover/concepts#environmental_understanding

[2] ARCore, ARKit: Augmented Reality for everyone, everywhere! https://www.cologne-intelligence.de/blog/arcore-arkit-augmented-reality-for-everyone-everywhere/

[3] Google 'Visual Positioning Service' AR Tracking in Action https://www.youtube.com/watch?v=L6-KF0HPbS8

[4] Announcing the Matterport3D Research Dataset https://matterport.com/blog/2017/09/20/announcing-matterport3d-research-dataset/

answered Nov 16 '22 by Aidan Hoolachan