I am trying to save depth images from the iPhone X TrueDepth camera. Using the AVCamPhotoFilter sample code, I am able to view the depth, converted to grayscale format, on the screen of the phone in real time. I cannot figure out how to save the sequence of depth images in their raw (16 bits or more) format.

I have depthData, an instance of AVDepthData. One of its members is depthDataMap, an instance of CVPixelBuffer with image format type kCVPixelFormatType_DisparityFloat16. Is there a way to save it on the phone so the data can be transferred off for offline manipulation?
On iOS devices with a back-facing dual camera or a front-facing TrueDepth camera, the capture system can record depth information. A depth map is like an image; however, instead of each pixel providing a color, it indicates the distance from the camera to that part of the image (either in absolute terms, or relative to other pixels in the depth map).

The TrueDepth camera captures accurate face data by projecting and analysing thousands of invisible dots to create a depth map of your face. It also captures an infrared image of your face.
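As a concrete illustration of that per-pixel layout (this is my own sketch, not code from the question or the sample project), here is one way to read a single kCVPixelFormatType_DisparityFloat16 value and convert it to depth in meters via depth = 1 / disparity. Swift's Float16 type requires iOS 14 / Swift 5.3 or later:

```swift
import AVFoundation

// Hypothetical helper: read the disparity at (x, y) from an AVDepthData's
// map and convert it to depth in meters. Assumes the map really is
// kCVPixelFormatType_DisparityFloat16, as in the question.
func depthInMeters(atX x: Int, y: Int, in depthData: AVDepthData) -> Float {
    let map = depthData.depthDataMap
    CVPixelBufferLockBaseAddress(map, .readOnly)
    defer { CVPixelBufferUnlockBaseAddress(map, .readOnly) }

    let bytesPerRow = CVPixelBufferGetBytesPerRow(map)
    let base = CVPixelBufferGetBaseAddress(map)!
    let row = (base + y * bytesPerRow).assumingMemoryBound(to: Float16.self)
    let disparity = row[x]
    // A disparity of 0 (or NaN for holes) means no valid measurement.
    return 1.0 / Float(disparity)
}
```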
The TrueDepth camera produces disparity maps by default so that the resulting depth data is similar to that produced by a dual camera device. However, unlike a dual camera device, the TrueDepth camera can directly measure depth (in meters) with AVDepthData.Accuracy.absolute accuracy.
To capture depth instead of disparity, set the activeDepthDataFormat of the capture device before starting your capture session:
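That configuration might look roughly like the following, adapted from Apple's AVDepthData documentation (captureDevice stands in for your configured AVCaptureDevice):

```swift
// Prefer a depth (not disparity) format among the device's supported ones.
let availableFormats = captureDevice.activeFormat.supportedDepthDataFormats
let depthFormat = availableFormats.first { format in
    let pixelFormatType = CMFormatDescriptionGetMediaSubType(format.formatDescription)
    return pixelFormatType == kCVPixelFormatType_DepthFloat16 ||
           pixelFormatType == kCVPixelFormatType_DepthFloat32
}

do {
    try captureDevice.lockForConfiguration()
    captureDevice.activeDepthDataFormat = depthFormat
    captureDevice.unlockForConfiguration()
} catch {
    print("Could not lock device for configuration: \(error)")
}
```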
There's no standard video format for "raw" depth/disparity maps, which might have something to do with AVCapture not really offering a way to record it.
You have a couple of options worth investigating here:
Convert depth maps to grayscale textures (which you can do using the code in the AVCamPhotoFilter sample code), then pass those textures to AVAssetWriter to produce a grayscale video. Depending on the video format and grayscale conversion method you choose, other software you write for reading the video might be able to recover depth/disparity info with sufficient precision for your purposes from the grayscale frames.
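If you pursue that first option, the writing side might look roughly like this sketch (my illustration, not part of AVCamPhotoFilter; DepthVideoRecorder, the H.264 codec choice, and the real-time flag are all assumptions). It assumes you already have grayscale-converted CVPixelBuffers in a format AVAssetWriter accepts (e.g. BGRA):

```swift
import AVFoundation

// Hypothetical recorder: appends converted grayscale pixel buffers to a movie.
final class DepthVideoRecorder {
    private let writer: AVAssetWriter
    private let input: AVAssetWriterInput
    private let adaptor: AVAssetWriterInputPixelBufferAdaptor
    private var sessionStarted = false

    init(outputURL: URL, width: Int, height: Int) throws {
        writer = try AVAssetWriter(outputURL: outputURL, fileType: .mov)
        input = AVAssetWriterInput(mediaType: .video, outputSettings: [
            AVVideoCodecKey: AVVideoCodecType.h264, // lossy; a less compressed codec preserves more precision
            AVVideoWidthKey: width,
            AVVideoHeightKey: height
        ])
        input.expectsMediaDataInRealTime = true
        adaptor = AVAssetWriterInputPixelBufferAdaptor(
            assetWriterInput: input, sourcePixelBufferAttributes: nil)
        writer.add(input)
        writer.startWriting()
    }

    func append(_ grayscaleBuffer: CVPixelBuffer, at time: CMTime) {
        guard input.isReadyForMoreMediaData else { return } // drop the frame rather than stall
        if !sessionStarted {
            writer.startSession(atSourceTime: time) // anchor the timeline to the first frame
            sessionStarted = true
        }
        adaptor.append(grayscaleBuffer, withPresentationTime: time)
    }

    func finish(completion: @escaping () -> Void) {
        input.markAsFinished()
        writer.finishWriting(completionHandler: completion)
    }
}
```

You would call append(_:at:) from your depth-data delegate callback with each converted buffer and its timestamp, then finish(completion:) when recording ends.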
Anytime you have a CVPixelBuffer, you can get at the data yourself and do whatever you want with it. Use CVPixelBufferLockBaseAddress (with the readOnly flag) to make sure the content won't change while you read it, then copy the data from the pointer CVPixelBufferGetBaseAddress provides to wherever you want. (Use other pixel buffer functions to see how many bytes to copy, and unlock the buffer when you're done.)
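A minimal sketch of that copy step (my illustration; the raw, frame-after-frame file layout is an arbitrary choice for offline use):

```swift
import CoreVideo
import Foundation

// Lock the buffer read-only, copy its bytes out, and unlock on exit.
func rawData(from pixelBuffer: CVPixelBuffer) -> Data {
    CVPixelBufferLockBaseAddress(pixelBuffer, .readOnly)
    defer { CVPixelBufferUnlockBaseAddress(pixelBuffer, .readOnly) }

    let base = CVPixelBufferGetBaseAddress(pixelBuffer)!
    let bytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer)
    let height = CVPixelBufferGetHeight(pixelBuffer)
    // Note: this copies row padding too; record bytesPerRow, width, and
    // height alongside the data so you can reinterpret it offline.
    return Data(bytes: base, count: bytesPerRow * height)
}
```

Per frame, you could append the returned Data to a FileHandle and transfer the resulting file off the device later.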
Watch out, though: if you spend too much time copying from buffers, or otherwise retain them, they won't get deallocated as new buffers come in from the capture system, and your capture session will hang. (All told, it's unclear without testing whether a device has the memory & I/O bandwidth for much recording this way.)