I'm exploring the new ARKit.
Currently, all the examples I've seen use the camera's relative position to place objects in the scene around the origin.
Suppose I have absolute real-world GPS coordinates that I'd like to place as markers in the scene. How would I go about doing that?
There are some demos and examples starting to pop up that actually do this, but I haven't seen any code or explanation so far.
ARKit on iOS 11 + CoreLocation Demo — GPS with virtual guidance
Any examples would be greatly appreciated.
Based on a particular GPS coordinate, ARKit downloads batches of imagery that depict the physical environment in that area and uses them to help the session determine the user's precise geographic location.
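If that snippet is describing ARKit's geo tracking (available from iOS 14 via ARGeoTrackingConfiguration), a minimal sketch of starting such a session and adding an anchor at an absolute coordinate could look like this; the latitude/longitude below is just a placeholder:

```swift
import ARKit
import CoreLocation

// A minimal sketch, assuming iOS 14+ and a device/region where geo tracking
// is available: start a geo-tracking session and drop an anchor at an
// absolute GPS coordinate.
func startGeoTracking(on session: ARSession) {
    guard ARGeoTrackingConfiguration.isSupported else { return }

    ARGeoTrackingConfiguration.checkAvailability { available, error in
        guard available, error == nil else { return }

        DispatchQueue.main.async {
            session.run(ARGeoTrackingConfiguration())

            // Placeholder coordinate; use the real-world location of your marker.
            let coordinate = CLLocationCoordinate2D(latitude: 37.3327, longitude: -122.0053)
            session.add(anchor: ARGeoAnchor(coordinate: coordinate))
        }
    }
}
```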
On a fourth-generation iPad Pro running iPadOS 13.4 or later, ARKit uses the LiDAR Scanner to create a polygonal model of the physical environment.
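That scene-reconstruction feature is about mesh geometry rather than GPS placement, but for reference, a minimal sketch of turning it on (assuming a LiDAR-equipped device) looks like this:

```swift
import ARKit

// A minimal sketch: ask ARKit to build a polygonal mesh of the surroundings.
// Only LiDAR-equipped devices (e.g. the 2020 iPad Pro) support this.
func makeSceneReconstructionConfiguration() -> ARWorldTrackingConfiguration? {
    guard ARWorldTrackingConfiguration.supportsSceneReconstruction(.mesh) else {
        return nil
    }

    let configuration = ARWorldTrackingConfiguration()
    configuration.sceneReconstruction = .mesh
    // The resulting geometry arrives as ARMeshAnchor instances in the session.
    return configuration
}
```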
Improved tracking: ARKit tends to perform better than ARCore at image tracking and recognition. If you intend to create AR apps that track user gestures to manipulate on-screen images, ARKit is usually the more efficient option, translating movement into data faster than Google's alternative.
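For the image-tracking side of that comparison, here is a minimal ARKit sketch, assuming an asset-catalog group named "AR Resources" that holds the reference images you want to recognize:

```swift
import ARKit

// A minimal sketch of configuring ARKit image tracking.
func makeImageTrackingConfiguration() -> ARImageTrackingConfiguration? {
    // "AR Resources" is an assumed asset-catalog group name.
    guard let referenceImages = ARReferenceImage.referenceImages(
        inGroupNamed: "AR Resources", bundle: nil) else {
        return nil
    }

    let configuration = ARImageTrackingConfiguration()
    configuration.trackingImages = referenceImages
    configuration.maximumNumberOfTrackedImages = 2
    return configuration
}
```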
OK, so a few days later the author of that video released the source on GitHub, and it's available for everyone to enjoy. There are still some things to sort out, such as the true-north position, but it's a good start.
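For anyone who wants the gist without digging through the repo, the usual approach is to align the ARKit world with true north and convert the GPS delta into meters. Here is a rough sketch of that idea (not the project's actual code; the conversion helper is a simplified approximation that only holds over short distances):

```swift
import ARKit
import CoreLocation

// Align the ARKit world with compass directions so that scene axes can be
// related to headings: with .gravityAndHeading, -Z points to true north and
// +X points east.
func makeWorldConfiguration() -> ARWorldTrackingConfiguration {
    let configuration = ARWorldTrackingConfiguration()
    configuration.worldAlignment = .gravityAndHeading
    return configuration
}

// Hypothetical helper: translate a target GPS coordinate into a scene
// position, assuming the session origin sits at `userLocation`. Uses an
// equirectangular approximation, so it is only reasonable for nearby markers.
func scenePosition(of target: CLLocationCoordinate2D,
                   relativeTo userLocation: CLLocation) -> SIMD3<Float> {
    let metersPerDegreeLatitude = 111_320.0
    let deltaLatitude = target.latitude - userLocation.coordinate.latitude
    let deltaLongitude = target.longitude - userLocation.coordinate.longitude

    let north = deltaLatitude * metersPerDegreeLatitude
    let east = deltaLongitude * metersPerDegreeLatitude
        * cos(userLocation.coordinate.latitude * .pi / 180)

    // -Z is north and +X is east; keep the marker at the session origin's height.
    return SIMD3<Float>(Float(east), 0, Float(-north))
}
```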