I'm using an ARSession combined with an ARFaceTrackingConfiguration to track my face. At the same time, I would like to record a video from the front-facing camera of my iPhone X. To do so I'm using an AVCaptureSession, but as soon as I start recording, the ARSession gets interrupted.
These are two snippets of code:
import ARKit
import AVFoundation

// Face tracking
let configuration = ARFaceTrackingConfiguration()
configuration.isLightEstimationEnabled = false
let session = ARSession()
session.run(configuration, options: [.removeExistingAnchors, .resetTracking])

// Video recording
let captureSession = AVCaptureSession()
let camera = AVCaptureDevice.default(.builtInWideAngleCamera, for: .video, position: .front)!
let input = try! AVCaptureDeviceInput(device: camera)
let output = AVCaptureMovieFileOutput()
captureSession.addInput(input)
captureSession.addOutput(output)
captureSession.startRunning()   // starting this session interrupts the ARSession
Does anybody know how to do the two things at the same time? Apps like Snapchat allow users to record and use the True Depth sensor at the same time so I imagine what I'm asking is perfectly feasible. Thanks!
It is possible to access both the front and rear cameras at the same time, but not through ARKit.
Camera access is usually limited to apps running in full-screen mode. If your app enters a multitasking mode like Split View, the system disables the camera. Starting in iOS 16, your app can use the camera while multitasking by setting the isMultitaskingCameraAccessEnabled property to true on supported systems.
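For instance, here is a minimal sketch of opting in on iOS 16 (the captureSession name is an assumption for illustration):

if #available(iOS 16.0, *), captureSession.isMultitaskingCameraAccessSupported {
    // Allow the capture session to keep running while the app is multitasking.
    captureSession.isMultitaskingCameraAccessEnabled = true
}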
The face tracking enabled by the iPhone's built-in TrueDepth camera has proven its accuracy and performance with Animoji. It stays accurate under most lighting conditions, making it a solid source of facial motion-capture data.
ARKit runs its own AVCaptureSession, and there can be only one capture session running at a time — if you run your own capture session, you preempt ARKit's, which prevents ARKit from working.

However, ARKit does provide access to the camera pixel buffers it receives from its capture session, so you can record video by feeding those sample buffers to an AVAssetWriter. (It's basically the same workflow you'd use when recording video from AVCaptureVideoDataOutput... a lower-level way of doing video recording compared to AVCaptureMovieFileOutput.)
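As a rough sketch of that workflow (the ARFrameRecorder class name, the H.264/1280x720 output settings, and the error handling are illustrative assumptions, not a definitive implementation):

import ARKit
import AVFoundation

// Writes the pixel buffers ARKit delivers in ARFrame.capturedImage to a .mov file.
final class ARFrameRecorder: NSObject, ARSessionDelegate {
    private var writer: AVAssetWriter?
    private var writerInput: AVAssetWriterInput?
    private var adaptor: AVAssetWriterInputPixelBufferAdaptor?
    private var startTime: CMTime?

    func startRecording(to url: URL) throws {
        let writer = try AVAssetWriter(outputURL: url, fileType: .mov)
        let settings: [String: Any] = [
            AVVideoCodecKey: AVVideoCodecType.h264,
            AVVideoWidthKey: 1280,   // assumed; match your ARFrame.capturedImage dimensions
            AVVideoHeightKey: 720
        ]
        let input = AVAssetWriterInput(mediaType: .video, outputSettings: settings)
        input.expectsMediaDataInRealTime = true
        let adaptor = AVAssetWriterInputPixelBufferAdaptor(assetWriterInput: input,
                                                           sourcePixelBufferAttributes: nil)
        writer.add(input)
        guard writer.startWriting() else {
            throw writer.error ?? NSError(domain: "ARFrameRecorder", code: -1)
        }
        self.writer = writer
        self.writerInput = input
        self.adaptor = adaptor
    }

    // ARKit calls this for every camera frame; append the frame's pixel buffer to the writer.
    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        guard let writer = writer, let input = writerInput, let adaptor = adaptor else { return }
        let time = CMTime(seconds: frame.timestamp, preferredTimescale: 600)
        if startTime == nil {
            writer.startSession(atSourceTime: time)
            startTime = time
        }
        if input.isReadyForMoreMediaData {
            // append(_:withPresentationTime:) returns false on failure; ignored here for brevity.
            _ = adaptor.append(frame.capturedImage, withPresentationTime: time)
        }
    }

    func stopRecording(completion: @escaping () -> Void) {
        writerInput?.markAsFinished()
        writer?.finishWriting(completionHandler: completion)
    }
}

You would make an instance of this class the ARSession's delegate (or forward frames to it from your existing delegate), then call startRecording(to:) before the frames you want captured and stopRecording(completion:) when done.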
You can also feed the ARKit camera pixel buffers (see ARFrame.capturedImage) to other technologies that work with live camera imagery, like the Vision framework. Apple has a sample code project demonstrating such usage.
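For example, a minimal sketch of running a Vision face-landmarks request on ARKit's captured image (the function name and the fixed .right orientation are assumptions; check the orientation against your UI):

import ARKit
import Vision

// Runs a Vision face-landmarks request directly on ARKit's captured camera image.
func detectFaceLandmarks(in frame: ARFrame) {
    let request = VNDetectFaceLandmarksRequest { request, _ in
        guard let faces = request.results as? [VNFaceObservation] else { return }
        print("Vision found \(faces.count) face(s)")
    }
    let handler = VNImageRequestHandler(cvPixelBuffer: frame.capturedImage,
                                        orientation: .right,
                                        options: [:])
    try? handler.perform([request])
}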