I would really like some guidance on combining Apple's Vision API with ARKit in a way that enables object recognition. It would not need to track the moving object, just recognize it at a stable position in 3D space so the AR experience can react accordingly.
I know this type of experience is available in services like Vuforia or Wikitude, but I would like to try it with only native Apple APIs.
You don't necessarily need to use the Vision framework inside your project, because ARKit already has this feature. All you need to do is set the detectionObjects instance property, available since iOS 12:
var detectionObjects: Set<ARReferenceObject> { get set }
Let's see what Apple's documentation says about it:
Use this property to choose known 3D objects for ARKit to find in the user's environment and present as ARObjectAnchor for use in your AR experience.
Here's working code (it's as simple as that):
import ARKit

class ViewController: UIViewController {

    @IBOutlet var sceneView: ARSCNView!
    let config = ARWorldTrackingConfiguration()

    override func viewDidLoad() {
        super.viewDidLoad()
        sceneView.delegate = self

        // Add your reference objects to a resource group
        // (here named "Resources") in the project's asset catalog
        guard let objects = ARReferenceObject.referenceObjects(inGroupNamed: "Resources",
                                                               bundle: nil) else {
            fatalError("No reference objects found!")
        }
        config.detectionObjects = objects
        sceneView.session.run(config)
    }
}
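By the way, you don't have to bundle reference objects in an asset catalog. A scanned .arobject file can also be loaded from a URL with the ARReferenceObject(archiveURL:) initializer. Here's a minimal sketch; the file name "chair.arobject" is just a hypothetical example:

// A sketch of loading a single scanned .arobject file from the app bundle;
// "chair" is a hypothetical file name, use your own scan
if let url = Bundle.main.url(forResource: "chair", withExtension: "arobject"),
   let referenceObject = try? ARReferenceObject(archiveURL: url) {
    config.detectionObjects = [referenceObject]
}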
And, of course, add an extension with the renderer(_:didAdd:for:) method:
extension ViewController: ARSCNViewDelegate {

    func renderer(_ renderer: SCNSceneRenderer,
                  didAdd node: SCNNode,
                  for anchor: ARAnchor) {

        // ARObjectAnchor is the special anchor type that ARKit adds
        // for every detected reference object
        if anchor is ARObjectAnchor {
            let sphereNode = SCNNode(geometry: SCNSphere(radius: 0.05))
            sphereNode.geometry?.firstMaterial?.diffuse.contents = UIColor.green
            node.addChildNode(sphereNode)
        }
    }
}
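If you register several reference objects, you can tell the detections apart through the anchor's referenceObject property and react differently per object. Here's a sketch of an alternative delegate method, assuming reference objects named "mug" and "book" in the asset catalog (both names are hypothetical):

func renderer(_ renderer: SCNSceneRenderer,
              didAdd node: SCNNode,
              for anchor: ARAnchor) {

    guard let objectAnchor = anchor as? ARObjectAnchor else { return }

    // Each ARObjectAnchor carries the ARReferenceObject it was matched
    // against, so its name tells you which object was recognized
    let sphereNode = SCNNode(geometry: SCNSphere(radius: 0.05))
    switch objectAnchor.referenceObject.name {
    case "mug"?:
        sphereNode.geometry?.firstMaterial?.diffuse.contents = UIColor.red
    case "book"?:
        sphereNode.geometry?.firstMaterial?.diffuse.contents = UIColor.blue
    default:
        sphereNode.geometry?.firstMaterial?.diffuse.contents = UIColor.green
    }
    node.addChildNode(sphereNode)
}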