I want to build a demo app in ARKit and I have some questions about what is currently possible with the beta (Apple has been calling this RealityKit, or ARKit 3.0).
The demo app I'm trying to build should do the following:
1. identify an object or image in the real environment, and create an anchor there
2. render a virtual model attached to the anchor
3. have the virtual model presented with occlusion
4. have the virtual model move along with the anchor image / object
I've tried adapting some code from earlier versions (ARKit 2.0, which leverages SceneKit), but certain features like people occlusion are not part of ARKit 2.0.
As Apple has been iterating on their beta, a lot of features advertised on their site and at WWDC 2019 have seemingly disappeared from the documentation for RealityKit (people occlusion, body tracking, world tracking).
The way I understand it, items (1) and (2) are possible with ARKit 2.0. Item (3) is advertised as possible with the beta, but I see little to no documentation.
Is this possible to do in the latest beta? If so, what is the best approach? If not, are there any workarounds like mixing the old and new APIs or something?
All the challenges you mentioned are achievable in both ARKit/SceneKit and ARKit/RealityKit.
- Identify an object or image in the real environment, and create an anchor there.
You're able to identify 3D objects or images using the following configurations in ARKit:
let configuration = ARWorldTrackingConfiguration()

guard let obj = ARReferenceObject.referenceObjects(inGroupNamed: "Resources",
                                                   bundle: nil)
else { return }

configuration.detectionObjects = obj    // allows you to create an ARObjectAnchor
sceneView.session.run(configuration)
and:
let config = ARWorldTrackingConfiguration()

guard let img = ARReferenceImage.referenceImages(inGroupNamed: "Resources",
                                                 bundle: nil)
else { return }

config.detectionImages = img            // allows you to create an ARImageAnchor
config.maximumNumberOfTrackedImages = 3
sceneView.session.run(config)
However, if you want to implement similar behaviour in RealityKit, use this:
let objectAnchor = AnchorEntity(.object(group: "Resources", name: "object"))
and:
let imageAnchor = AnchorEntity(.image(group: "Resources", name: "model"))
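These RealityKit anchors only start tracking once they are added to the scene of an ARView. A minimal sketch, assuming arView is your ARView instance and the referenced resources live in a group named "Resources" in your asset catalog:

// Anything parented to these anchors is rendered at the detected object / image.
arView.scene.anchors.append(objectAnchor)
arView.scene.anchors.append(imageAnchor)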
- Render a virtual model attached to the anchor.
At the moment ARKit has four companions helping you render 2D and 3D graphics: RealityKit, SceneKit, SpriteKit, and Metal.
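As an illustration of the RealityKit path, you can attach a model entity to the objectAnchor created above; the box size and material below are arbitrary placeholders, not values from any particular project:

import RealityKit

// A simple procedurally generated model: a 10 cm metallic box.
let boxMesh = MeshResource.generateBox(size: 0.1)
let boxMaterial = SimpleMaterial(color: .white, roughness: 0.3, isMetallic: true)
let boxEntity = ModelEntity(mesh: boxMesh, materials: [boxMaterial])

// Parenting the model to the anchor is all that's needed for it to be rendered there.
objectAnchor.addChild(boxEntity)

The SceneKit route is shown in the delegate example further below; SpriteKit and Metal follow their usual rendering paths.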
- Have the virtual model presented with occlusion.
In the RealityKit module, all materials are structures that conform to the Material protocol. At the moment there are six types of materials; the one you need here is OcclusionMaterial.
Look at THIS POST to find out how to assign materials programmatically in RealityKit.
And THIS POST shows you how to assign custom occlusion material in SceneKit.
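For completeness, here is a minimal sketch of both kinds of occlusion, reusing the imageAnchor and arView from above (the plane size is an arbitrary placeholder). An entity with OcclusionMaterial is invisible but hides any virtual content behind it, while people occlusion is enabled at the session level and requires an A12 or newer device:

import ARKit
import RealityKit

// An invisible occluder: virtual content behind it is masked out, so the camera feed shows through.
let occluderMesh = MeshResource.generatePlane(width: 0.5, depth: 0.5)
let occluder = ModelEntity(mesh: occluderMesh, materials: [OcclusionMaterial()])
imageAnchor.addChild(occluder)

// People occlusion is a session feature, not a material.
let config = ARWorldTrackingConfiguration()
if ARWorldTrackingConfiguration.supportsFrameSemantics(.personSegmentationWithDepth) {
    config.frameSemantics.insert(.personSegmentationWithDepth)
}
arView.session.run(config)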
- Have a virtual model move along with an image/object anchor.
To implement this type of behavior in ARKit + SceneKit you have to use the renderer(_:didAdd:for:) or session(_:didAdd:) delegate methods. In RealityKit, AnchorEntities are tracked automatically.
Here's an example of using ARObjectAnchor in the renderer(_:didAdd:for:) instance method:
extension ViewController: ARSCNViewDelegate {

    func renderer(_ renderer: SCNSceneRenderer,
                  didAdd node: SCNNode,
                  for anchor: ARAnchor) {

        // The node ARKit supplies here stays aligned with the detected object,
        // so any child content moves along with it.
        if let _ = anchor as? ARObjectAnchor {
            let text = SCNText(string: "ARKit", extrusionDepth: 0.5)
            let textNode = SCNNode(geometry: text)
            node.addChildNode(textNode)
        }
    }
}
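And here is a minimal sketch of the session(_:didAdd:) alternative mentioned above. It only reacts to the detection event (printing the detected image's name); with SceneKit you would normally add nodes in renderer(_:didAdd:for:) as shown above, and in RealityKit the AnchorEntity snippets shown earlier already handle tracking for you:

import ARKit

extension ViewController: ARSessionDelegate {

    // Requires sceneView.session.delegate = self (e.g. in viewDidLoad).
    func session(_ session: ARSession, didAdd anchors: [ARAnchor]) {
        for anchor in anchors {
            guard let imageAnchor = anchor as? ARImageAnchor else { continue }
            print("Detected image:", imageAnchor.referenceImage.name ?? "unnamed")
        }
    }
}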