I have a pretty straightforward ARKit demo app in which I place a few objects into view with labeling. I'm currently using the snapshot() function on ARSCNView to get a screenshot of the video input with all my rendered objects in view.
However, I'd like to get a UIImage of just the raw camera input, without any extra rendered objects on top of it, preferably at the same resolution as what's displayed on the screen. I'm not sure what the best way to do that would be. Is there a way to strip the rendered content, or should I tie into the camera feed directly?
Currently it looks a little something like this:
class ViewController: UIViewController, ARSCNViewDelegate, ARSessionDelegate {
    @IBOutlet weak var sceneView: ARSCNView!
    var nodeModel: SCNNode!
    ...

    override func viewDidLoad() {
        let scene = SCNScene()
        sceneView.scene = scene
        let modelScene = SCNScene(named: ...
        nodeModel = modelScene?.rootNode.childNode(...
        ...
        self.view.addSubview( ...
    }

    ...some ARKit stuff...some model stuff...
}

extension ViewController {
    func renderer(_ renderer: SCNSceneRenderer, didRenderScene scene: SCNScene, atTime time: TimeInterval) {
        // Throttle captures to roughly one every 0.1 s while the capture button is active.
        if self.captureButton.capturing && time > self.captureTime {
            captureScreenshot(scene)
            self.captureTime = time + TimeInterval(0.1)
        }
    }

    func captureScreenshot(_ scene: SCNScene) {
        // snapshot() must run on the main thread; it returns the rendered view
        // (camera feed plus virtual content) as a UIImage.
        DispatchQueue.main.async {
            let screenshot = self.sceneView.snapshot()
            UIImageWriteToSavedPhotosAlbum(screenshot, nil, nil, nil)
        }
    }
}
Implement the optional ARSessionDelegate method:

func session(_ session: ARSession, didUpdate frame: ARFrame)

ARFrame has a property capturedImage, which is a CVPixelBuffer that you can then convert to a UIImage like so:
import VideoToolbox

extension UIImage {
    public convenience init?(pixelBuffer: CVPixelBuffer) {
        // Ask VideoToolbox to wrap the pixel buffer in a CGImage.
        var cgImage: CGImage?
        VTCreateCGImageFromCVPixelBuffer(pixelBuffer, options: nil, imageOut: &cgImage)
        guard let cgImage = cgImage else { return nil }
        self.init(cgImage: cgImage)
    }
}
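Putting it together, a minimal sketch of the delegate hookup might look like the following (the rawCameraImage name is illustrative, not from your code, and you'll need to set sceneView.session.delegate = self, e.g. in viewDidLoad, or the callback never fires):

extension ViewController {
    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        // frame.capturedImage is the raw camera feed as a CVPixelBuffer,
        // with no SceneKit content composited on top.
        guard let rawCameraImage = UIImage(pixelBuffer: frame.capturedImage) else { return }
        // ...use rawCameraImage, e.g. save it alongside the rendered snapshot.
    }
}

One caveat: capturedImage is delivered at the camera's native sensor resolution and in its native (landscape) orientation, so it generally won't match the on-screen size or orientation exactly; you may need to rotate and crop it to match what's displayed.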