I'm trying to understand and use ARKit. But there is one thing that I cannot fully understand.
Apple said about ARAnchor:
A real-world position and orientation that can be used for placing objects in an AR scene.
But that's not enough. So my questions are:
What is ARAnchor exactly?
Is ARAnchor just a part of feature points?
ARAnchor is an invisible null-object that holds a 3D model at the anchor's position. Think of ARAnchor as a parent transform node with a local axis (you can translate, rotate and scale it). Every 3D model has a pivot point, right? So this pivot point must meet an ARAnchor in ARKit.
If you're not using anchors in an ARKit app (in RealityKit it's impossible not to use anchors, because they are part of a scene), your 3D models may drift from where they were placed, which dramatically impacts the app's realism and user experience. Anchors are therefore crucial elements of any AR scene.
According to the ARKit documentation (2017):
ARAnchor is a real-world position and orientation that can be used for placing objects in an AR scene. Adding an anchor to the session helps ARKit to optimize world-tracking accuracy in the area around that anchor, so that virtual objects appear to stay in place relative to the real world. If a virtual object moves, remove the corresponding anchor from the old position and add one at the new position.
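Here's a minimal sketch of that workflow, assuming sceneView is an ARSCNView running a world-tracking session and transform is the 4x4 matrix where the model should stay:
// Pin a real-world position with a custom ARAnchor;
// the model's pivot should then meet this anchor's transform.
let anchor = ARAnchor(name: "modelAnchor", transform: transform)
sceneView.session.add(anchor: anchor)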
ARAnchor is the parent class of 10 other anchor types in ARKit, hence all those subclasses inherit from ARAnchor. Usually you do not use ARAnchor directly. I must also say that ARAnchor and Feature Points have nothing in common – Feature Points are rather special visual elements for tracking and debugging.
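If you'd like to inspect feature points yourself, here's a minimal sketch, assuming sceneView is an ARSCNView:
// Draw feature points on screen and read the raw point cloud per frame.
sceneView.debugOptions = [.showFeaturePoints]

if let cloud = sceneView.session.currentFrame?.rawFeaturePoints {
    print("Tracked feature points:", cloud.points.count)
}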
ARAnchor doesn't automatically track a real-world target. If you need automation, you have to use the renderer(...) or session(...) instance methods, which you can implement if you conform to the ARSCNViewDelegate or ARSessionDelegate protocols, respectively.
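For instance, here's a sketch of the ARSessionDelegate path, assuming you've set sceneView.session.delegate = self:
// ARKit calls this method whenever new anchors are added to the session.
func session(_ session: ARSession, didAdd anchors: [ARAnchor]) {
    for anchor in anchors {
        print("New anchor:", anchor.identifier, anchor.transform)
    }
}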
Here's an image with a visual representation of a plane anchor. Keep in mind: by default, you can neither see a detected plane nor its corresponding ARPlaneAnchor. So, if you want to see an anchor in your scene, you may "visualize" it using three thin SCNCylinder primitives, where each cylinder's color represents a particular axis: RGB is XYZ.
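Here's a minimal SceneKit sketch of such a visualization; axesNode() is a hypothetical helper whose result you'd attach to the node ARKit supplies for the anchor:
import SceneKit
import UIKit

// Visualizes an anchor's local axes with three thin cylinders:
// red = X, green = Y, blue = Z. SCNCylinder is Y-aligned by default,
// so the X and Z cylinders are rotated into place.
func axesNode(length: CGFloat = 0.1, radius: CGFloat = 0.002) -> SCNNode {
    let node = SCNNode()
    let colors: [UIColor] = [.red, .green, .blue]

    for (index, color) in colors.enumerated() {
        let cylinder = SCNCylinder(radius: radius, height: length)
        cylinder.firstMaterial?.diffuse.contents = color
        let axis = SCNNode(geometry: cylinder)

        switch index {
        case 0:                                   // X axis
            axis.eulerAngles.z = -.pi / 2
            axis.position.x = Float(length) / 2
        case 1:                                   // Y axis
            axis.position.y = Float(length) / 2
        default:                                  // Z axis
            axis.eulerAngles.x = .pi / 2
            axis.position.z = Float(length) / 2
        }
        node.addChildNode(axis)
    }
    return node
}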
In ARKit you can automatically add ARAnchors to your scene in several scenarios (a configuration sketch for the first two follows this list):

- ARPlaneAnchor – when the planeDetection instance property is ON, ARKit is able to add ARPlaneAnchors to the running session. Sometimes enabled planeDetection considerably increases the time required for the scene-understanding stage.

- ARImageAnchor (conforms to the ARTrackable protocol) – activate this anchor via the detectionImages instance property. In ARKit 2.0 you can track up to 25 images in total, in ARKit 3.0 / 4.0 – up to 100 images, respectively. But in both cases, no more than 4 images simultaneously. It was promised that in ARKit 5.0 / 6.0 you'd be able to detect and track up to 100 images at a time, but it's still not implemented.

- ARBodyAnchor (conforms to the ARTrackable protocol) – run a session with ARBodyTrackingConfiguration(). You'll get an ARBodyAnchor at the Root Joint of the CG skeleton or, in other words, at the pelvis position of the tracked character.

- ARFaceAnchor (conforms to the ARTrackable protocol) – ARKit creates an ARFaceAnchor with the help of the front TrueDepth camera. When a face is detected, the Face Anchor is attached slightly behind the nose, in the center of the face. In ARKit 2.0 you can track just one face; in ARKit 3.0 and higher – up to 3 faces simultaneously. The number of tracked faces depends on the presence of a TrueDepth sensor and the processor version: gadgets with a TrueDepth camera can track up to 3 faces, and gadgets with an A12+ chipset but without a TrueDepth camera can also track up to 3 faces.

- ARObjectAnchor – supply ARReferenceObject instances to the detectionObjects property of the session config.

- AREnvironmentProbeAnchor – provides environmental lighting information for a specific area of space in automatic mode.

- ARParticipantAnchor – used in collaborative sessions: set a true value for the isCollaborationEnabled property in ARWorldTrackingConfiguration, then import the MultipeerConnectivity framework.

- ARMeshAnchor – on devices with a LiDAR scanner, ARKit subdivides the reconstructed scene into 30-40 anchors, or even more. This is due to the fact that each classified object (wall, chair, door or table) has its own personal anchor. Each ARMeshAnchor stores data about the corresponding vertices, one of eight classification cases, its faces and vertex normals.

- ARGeoAnchor (conforms to the ARTrackable protocol) – tracks a geographic location (latitude, longitude, altitude) using ARGeoTrackingConfiguration.

- ARAppClipCodeAnchor (conforms to the ARTrackable protocol) – tracks the position and orientation of an App Clip Code in the physical environment.
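Here's the configuration sketch mentioned above. It's a minimal, hedged example that turns on plane detection and image detection at the same time; "AR Resources" is an assumed asset-catalog group name, and sceneView is assumed to be an ARSCNView:
// Enable two anchor-generating features in one world-tracking config.
let config = ARWorldTrackingConfiguration()
config.planeDetection = [.horizontal, .vertical]

if let images = ARReferenceImage.referenceImages(inGroupNamed: "AR Resources",
                                                 bundle: nil) {
    config.detectionImages = images
    config.maximumNumberOfTrackedImages = 4    // the simultaneous-tracking cap
}
sceneView.session.run(config)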
There are also other regular approaches to create anchors in an AR session:

- Hit-Testing methods – the ARHitTestResult class and its corresponding hit-testing methods for ARSCNView and ARSKView will be deprecated in iOS 14, so you have to get used to Ray-Casting.
- Ray-Casting methods – see the sketch right after this list.
- Feature Points – you can create an anchor at the position of a detected feature point.
- ARCamera's transform – you can create an anchor from the camera's current transform.
- Any arbitrary World Position – a world anchor like AnchorEntity(.world(transform: mtx)) found in RealityKit.
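Here's the ray-casting sketch mentioned above, assuming arView is a RealityKit ARView and model is some preloaded Entity:
// Cast a ray from the screen center onto an estimated horizontal plane,
// then anchor the model at the hit location.
let screenCenter = CGPoint(x: arView.bounds.midX, y: arView.bounds.midY)

if let result = arView.raycast(from: screenCenter,
                               allowing: .estimatedPlane,
                               alignment: .horizontal).first {
    let anchor = AnchorEntity(raycastResult: result)
    anchor.addChild(model)
    arView.scene.anchors.append(anchor)
}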
This code snippet shows you how to use an ARPlaneAnchor in the renderer(_:didAdd:for:) delegate method:
func renderer(_ renderer: SCNSceneRenderer,
              didAdd node: SCNNode,
              for anchor: ARAnchor) {

    guard let planeAnchor = anchor as? ARPlaneAnchor else { return }
    // Grid is a custom SCNNode subclass that visualizes the detected plane.
    let grid = Grid(anchor: planeAnchor)
    node.addChildNode(grid)
}
AnchorEntity is the alpha and omega in RealityKit. According to the RealityKit documentation (2019):
AnchorEntity is an anchor that tethers virtual content to a real-world object in an AR session.
The RealityKit framework and the Reality Composer app were announced at WWDC'19. They introduced a new class named AnchorEntity. You can use AnchorEntity as the root point of any hierarchy of entities, and you must add it to the Scene's anchors collection. AnchorEntity automatically tracks its real-world target. In RealityKit and Reality Composer, AnchorEntity sits at the top of the hierarchy. Such an anchor is able to hold a hundred models, and in that case it's more stable than using 100 personal anchors for each model.
Let's see how it looks in code:
func makeUIView(context: Context) -> ARView {
    let arView = ARView(frame: .zero)
    // Experience.loadModel() is generated by Reality Composer and
    // returns a ready-made AnchorEntity with the model attached.
    let modelAnchor = try! Experience.loadModel()
    arView.scene.anchors.append(modelAnchor)
    return arView
}
AnchorEntity has three components: an Anchoring component, a Transform component and a Synchronization component. To find out the difference between ARAnchor and AnchorEntity, look at THIS POST.
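Here's a minimal sketch that reads those three components off an AnchorEntity:
let entity = AnchorEntity(.camera)
let anchoring: AnchoringComponent = entity.anchoring           // what to anchor to
let transform: Transform = entity.transform                    // position, rotation, scale
let sync: SynchronizationComponent? = entity.synchronization   // for collaborative sessions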
Here are the AnchorEntity cases and initializers available in RealityKit 2.0 for iOS:
// Fixed position in the AR scene
AnchorEntity(.world(transform: mtx))
// For body tracking (a.k.a. Motion Capture)
AnchorEntity(.body)
// Pinned to the tracking camera
AnchorEntity(.camera)
// For face tracking (Selfie Camera config)
AnchorEntity(.face)
// For image tracking config
AnchorEntity(.image(group: "GroupName", name: "forModel"))
// For object tracking config
AnchorEntity(.object(group: "GroupName", name: "forObject"))
// For plane detection with surface classification
AnchorEntity(.plane([.any], classification: [.seat], minimumBounds: [1, 1]))
// When you use ray-casting
AnchorEntity(raycastResult: myRaycastResult)
// When you use ARAnchor with a given identifier
AnchorEntity(.anchor(identifier: uuid))
// Creates anchor entity on a basis of ARAnchor
AnchorEntity(anchor: arAnchor)
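And here's a usage sketch for the plane case, where model is assumed to be a preloaded Entity:
// Pin the model to the first detected horizontal seat-like surface.
let anchor = AnchorEntity(.plane(.horizontal,
                                 classification: .seat,
                                 minimumBounds: [0.4, 0.4]))
anchor.addChild(model)
arView.scene.anchors.append(anchor)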
And here are the only two AnchorEntity cases available in RealityKit 2.0 for macOS:
// Fixed world position in VR scene
AnchorEntity(.world(transform: mtx))
// Camera transform
AnchorEntity(.camera)
Also, it's not superfluous to say that you can use any subclass of ARAnchor for AnchorEntity's needs:
func session(_ session: ARSession, didAdd anchors: [ARAnchor]) {

    guard let faceAnchor = anchors.first as? ARFaceAnchor else { return }
    // The face anchor is already in the session, so simply tether
    // an AnchorEntity to it and put that entity into the scene.
    self.anchor = AnchorEntity(anchor: faceAnchor)
    anchor.addChild(model)
    arView.scene.anchors.append(self.anchor)
}
At the moment (February 2022) Reality Composer supports just 4 types of AnchorEntity:
// 1a
AnchorEntity(plane: .horizontal)
// 1b
AnchorEntity(plane: .vertical)
// 2
AnchorEntity(.image(group: "GroupName", name: "forModel"))
// 3
AnchorEntity(.face)
// 4
AnchorEntity(.object(group: "GroupName", name: "forObject"))
And of course, I should say a few words about preliminary anchors. There are 3 preliminary anchoring types (July 2022) for those who prefer Python scripting for USDZ models – plane, image and face preliminary anchors. Look at this code snippet to find out how to declare the anchoring schema in a .usda file:
def Cube "ImageAnchoredBox" (prepend apiSchemas = ["Preliminary_AnchoringAPI"])
{
    uniform token preliminary:anchoring:type = "image"
    rel preliminary:imageAnchoring:referenceImage = <ImageReference>

    def Preliminary_ReferenceImage "ImageReference"
    {
        uniform asset image = @somePicture.jpg@
        uniform double physicalWidth = 45
    }
}
If you want to know more about AR USD Schemas, read this story on Medium.
Here's an example of how to visualize anchors in RealityKit (macOS version):
import AppKit
import RealityKit

class ViewController: NSViewController {

    @IBOutlet var arView: ARView!
    var model = Entity()
    let anchor = AnchorEntity()

    fileprivate func visualAnchor() -> Entity {

        let colors: [SimpleMaterial.Color] = [.red, .green, .blue]

        for index in 0...2 {
            // A thin box works as an axis "cylinder": red = X, green = Y, blue = Z.
            let box: MeshResource = .generateBox(size: [0.20, 0.005, 0.005])
            let material = UnlitMaterial(color: colors[index])
            let entity = ModelEntity(mesh: box, materials: [material])

            if index == 0 {                                                  // X axis
                entity.position.x += 0.1
            } else if index == 1 {                                           // Y axis
                entity.transform = Transform(pitch: 0, yaw: 0, roll: .pi/2)
                entity.position.y += 0.1
            } else if index == 2 {                                           // Z axis
                entity.transform = Transform(pitch: 0, yaw: -.pi/2, roll: 0)
                entity.position.z += 0.1
            }
            self.model.addChild(entity)
        }
        model.scale *= 1.5    // scale the whole gizmo once
        return self.model
    }

    override func awakeFromNib() {
        super.awakeFromNib()
        anchor.addChild(self.visualAnchor())
        arView.scene.addAnchor(anchor)
    }
}
At the end of my post, I would like to talk about four types of anchors that are used in ARCore 1.33. Google's official documentation says the following about anchors: "ArAnchor describes a fixed location and orientation in the real world". ARCore anchors work similarly to ARKit anchors.
Let's take a look at the ArAnchor types:

- Local anchors – stored locally on the device and valid only for the current instance of the app.
- Cloud Anchors – hosted in Google Cloud (with the Persistent Cloud Anchors API, you can create a cloud anchor that can be resolved from one day up to 365 days after creation). They can be resolved by multiple users to establish a common frame of reference across users and their devices.
- Geospatial anchors – based on Visual Positioning System data, to provide a precise location almost anywhere in the world; these anchors may be shared between app instances. The user may place an anchor from a remote location as long as the app is connected to the internet and able to use the VPS.
- Terrain anchors – a kind of geospatial anchor that lets you place objects using latitude and longitude only, while the altitude is resolved relative to the terrain automatically.
When anchoring objects in ARCore, make sure that they are close to the anchor you are using. Avoid placing objects farther than 8 meters from the anchor to prevent unexpected rotational movement due to ARCore's updates to world space coordinates. If you need to place an object more than eight meters away from an existing anchor, create a new anchor closer to this position and attach the object to the new anchor.
These Kotlin code snippets show you how to use a Geospatial anchor:
fun configureSession(session: Session) {
    session.configure(
        session.config.apply {
            // Turn on the Geospatial API for this session.
            geospatialMode = Config.GeospatialMode.ENABLED
        }
    )
}
// Obtain the Earth object and make sure geospatial tracking is running.
val earth = session?.earth ?: return
if (earth.trackingState != TrackingState.TRACKING) { return }

// Detach the previous anchor and place a new one a meter below
// the camera's geospatial altitude, with an identity orientation.
earthAnchor?.detach()
val altitude = earth.cameraGeospatialPose.altitude - 1
val qx = 0f; val qy = 0f; val qz = 0f; val qw = 1f
earthAnchor = earth.createAnchor(latLng.latitude,
                                 latLng.longitude,
                                 altitude,
                                 qx, qy, qz, qw)