So I understand that in order to track images, we need to create an AR Resources folder and place all the images we intend to track there, as well as configure, via the inspector, their real-world size properties.
Then we set the array of ARReferenceImages on the session's world-tracking configuration.
All good with that. But HOW MANY can we track? 10? 100? 1,000,000? And would it be possible to download those images and create ARReferenceImages on the fly, instead of having them in the bundle from the very beginning?
Having a look at the Apple docs, they don't seem to specify a limit, so it is reasonable to assume it depends on memory management, performance, etc.
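The docs do, however, expose a related setting: from iOS 12, ARImageTrackingConfiguration lets you cap how many images are tracked simultaneously, even if many more are supplied for detection. A minimal sketch (the group name "AR Resources" is Xcode's default, and the cap of 4 is an arbitrary example, not a documented limit):

```swift
import ARKit

// Sketch (iOS 12+): there is no documented cap on how many reference
// images you can supply, but you can limit how many are tracked at once.
// "AR Resources" is Xcode's default group name; 4 is an arbitrary cap.
let configuration = ARImageTrackingConfiguration()

if let referenceImages = ARReferenceImage.referenceImages(inGroupNamed: "AR Resources",
                                                          bundle: .main) {
    configuration.trackingImages = referenceImages
}

// Images beyond this count are still detectable, just not tracked concurrently.
configuration.maximumNumberOfTrackedImages = 4
```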
Regarding creating images on the fly, this is definitely possible.
According to the docs, this can be done in one of two ways:
Creating a new reference image from a Core Graphics image object:
init(CGImage, orientation: CGImagePropertyOrientation, physicalWidth: CGFloat)
Creating a new reference image from a Core Video pixel buffer:
init(CVPixelBuffer, orientation: CGImagePropertyOrientation, physicalWidth: CGFloat)
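As a minimal sketch of the second initializer: ARFrame.capturedImage is a CVPixelBuffer, so a snapshot of the camera feed itself could become a reference image (the 0.1 m physical width and the name below are assumed values for illustration):

```swift
import ARKit

// Sketch of the CVPixelBuffer initializer. ARFrame.capturedImage is a
// CVPixelBuffer, so it can feed ARReferenceImage directly.
// The 0.1 m physical width is an assumption, not a required value.
func referenceImage(from frame: ARFrame) -> ARReferenceImage {
    let image = ARReferenceImage(frame.capturedImage,
                                 orientation: .up,
                                 physicalWidth: 0.1)
    image.name = "Camera Snapshot"
    return image
}
```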
Here is an example of creating a reference image on the fly using an image from the standard assets bundle, although this could easily be adapted to parse an image from a URL etc.:
// Create ARReferenceImages From Somewhere Other Than The Default Folder
func loadDynamicImageReferences() {

    //1. Get The Image From The Bundle
    guard let imageFromBundle = UIImage(named: "moonTarget"),
        //2. Convert It To A CIImage
        let imageToCIImage = CIImage(image: imageFromBundle),
        //3. Then Convert The CIImage To A CGImage
        let cgImage = convertCIImageToCGImage(inputImage: imageToCIImage) else { return }

    //4. Create An ARReferenceImage (Remembering Physical Width Is In Metres)
    let arImage = ARReferenceImage(cgImage, orientation: CGImagePropertyOrientation.up, physicalWidth: 0.2)

    //5. Name The Image
    arImage.name = "CGImage Test"

    //6. Set The ARWorldTrackingConfiguration's Detection Images (Assuming A Configuration Is Running)
    configuration.detectionImages = [arImage]
}

/// Converts A CIImage To A CGImage
///
/// - Parameter inputImage: The CIImage to convert
/// - Returns: An optional CGImage
func convertCIImageToCGImage(inputImage: CIImage) -> CGImage? {
    let context = CIContext(options: nil)
    return context.createCGImage(inputImage, from: inputImage.extent)
}
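For completeness, here is one way the URL adaptation might look. This is a sketch only: the function name, completion-based design, and error handling are assumptions, not part of any ARKit API, and physicalWidth must still match the printed image's real-world width in metres:

```swift
import ARKit
import UIKit

// Sketch: download an image and build an ARReferenceImage from it on the fly.
// The function signature and naming scheme here are illustrative assumptions.
func loadReferenceImage(from url: URL,
                        physicalWidth: CGFloat,
                        completion: @escaping (ARReferenceImage?) -> Void) {
    URLSession.shared.dataTask(with: url) { data, _, _ in
        guard let data = data,
              let downloadedImage = UIImage(data: data),
              let cgImage = downloadedImage.cgImage else {
            completion(nil)
            return
        }
        let referenceImage = ARReferenceImage(cgImage,
                                              orientation: .up,
                                              physicalWidth: physicalWidth)
        referenceImage.name = url.lastPathComponent
        completion(referenceImage)
    }.resume()
}
```

Since detectionImages is a Set, images created this way can be accumulated and assigned to the configuration before (re)running the session.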
We can then test this within ARSCNViewDelegate, e.g.:
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {

    //1. If Our Target Image Has Been Detected Then Get The Corresponding Anchor
    guard let currentImageAnchor = anchor as? ARImageAnchor else { return }

    let x = currentImageAnchor.transform
    print(x.columns.3.x, x.columns.3.y, x.columns.3.z)

    //2. Get The Target's Name
    let name = currentImageAnchor.referenceImage.name!

    //3. Get The Target's Width & Height In Metres
    let width = currentImageAnchor.referenceImage.physicalSize.width
    let height = currentImageAnchor.referenceImage.physicalSize.height

    print("""
    Image Name = \(name)
    Image Width = \(width)
    Image Height = \(height)
    """)

    //4. Create A Plane Geometry To Cover The ARImageAnchor
    let planeNode = SCNNode()
    let planeGeometry = SCNPlane(width: width, height: height)
    planeGeometry.firstMaterial?.diffuse.contents = UIColor.white
    planeNode.opacity = 0.25
    planeNode.geometry = planeGeometry

    //5. Rotate The PlaneNode To Horizontal
    planeNode.eulerAngles.x = -.pi / 2

    //The Node Is Centred In The Anchor (0,0,0)
    node.addChildNode(planeNode)

    //6. Create An SCNBox
    let boxNode = SCNNode()
    let boxGeometry = SCNBox(width: 0.1, height: 0.1, length: 0.1, chamferRadius: 0)

    //7. Create A Different Colour For Each Face
    let faceColours = [UIColor.red, UIColor.green, UIColor.blue, UIColor.cyan, UIColor.yellow, UIColor.gray]
    var faceMaterials = [SCNMaterial]()

    //8. Apply A Colour To Each Of The Six Faces
    for face in 0 ..< faceColours.count {
        let material = SCNMaterial()
        material.diffuse.contents = faceColours[face]
        faceMaterials.append(material)
    }
    boxGeometry.materials = faceMaterials
    boxNode.geometry = boxGeometry

    //9. Position The Box So It Sits On The Plane (y = Half The Box's Height)
    boxNode.position = SCNVector3(0, 0.05, 0)

    //10. Add The Box To The Node
    node.addChildNode(boxNode)
}
As you can see, the process is fairly easy. So in your case, you are probably most interested in the conversion function above, which uses this method to create the dynamic images:
init(CGImage, orientation: CGImagePropertyOrientation, physicalWidth: CGFloat)