Does ARKit consider Lens Distortion in iPhone and iPad?

ARKit updates many intrinsic (and extrinsic) parameters of the ARCamera from frame to frame. I'd like to know: does it also take radial lens distortion into account (like the AVCameraCalibrationData class, which ARKit doesn't use), and correct the video frames' distortion appropriately (distort/undistort operations) for the rear iPhone and iPad cameras?

var intrinsics: simd_float3x3 { get }
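For context, those intrinsics are read off the ARCamera of each ARFrame. A minimal sketch of inspecting them from an ARSessionDelegate (assuming a running ARSession; the delegate class name here is just illustrative):

import ARKit

class IntrinsicsLogger: NSObject, ARSessionDelegate {
    // Called for every new frame; ARKit may refine the intrinsics over time.
    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        let K = frame.camera.intrinsics        // 3x3 pinhole matrix (column-major simd storage)
        let fx = K[0][0], fy = K[1][1]         // focal lengths in pixels
        let cx = K[2][0], cy = K[2][1]         // principal point in pixels
        print("fx: \(fx)  fy: \(fy)  cx: \(cx)  cy: \(cy)")
        // Note: ARCamera exposes no lens-distortion coefficients, only these pinhole intrinsics.
    }
}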


As we all know, radial lens distortion greatly affects 6 DOF pose estimation accuracy when we place undistorted 3D objects into a real-world scene that has been distorted by the lens.


var lensDistortionLookupTable: Data? { get } 

/* A map of floating-point values describing radial */
/* lens distortions in AVCameraCalibrationData class */
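For reference, a minimal sketch of unpacking that lookup table into Float values (assuming you already have an AVCameraCalibrationData instance, e.g. from an AVCapturePhoto or AVDepthData; the helper name is made up):

import AVFoundation

// Unpack the Data blob into an array of radial magnification values.
func distortionCurve(from calibration: AVCameraCalibrationData) -> [Float] {
    guard let table = calibration.lensDistortionLookupTable else { return [] }
    return table.withUnsafeBytes { raw in
        Array(raw.bindMemory(to: Float.self))
    }
}
// The values describe radial distortion measured outward from calibration.lensDistortionCenter;
// calibration.inverseLensDistortionLookupTable holds the inverse mapping for undistortion.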

If the lens distortion math used by ARKit is available in the API, where can I find it?

Asked Aug 26 '18 by Andy Jazz


People also ask

Does iPhone have lens distortion?

The ultra-wide lens is known to cause distortion at the edges of the frame, so people and objects look slightly warped. To compensate for the skewed edges, Apple incorporated "Lens Correction," but that fix isn't perfect either. Like many under-the-radar iOS features, Lens Correction is enabled by default.

How do you get rid of lens distortion on iPhone?

On iPhone 12 models, iPhone 13 models, and iPhone 14 models, the Lens Correction setting adjusts photos taken with the front camera or Ultra Wide camera for more natural-looking results. Lens Correction is on by default. To turn off Lens Correction, go to Settings > Camera, then turn off Lens Correction.

Does iPhone distort images?

Wide-angle lenses (as used in iPhones) can cause a barrel distortion effect in images taken at a close distance.

Does ARKit use LiDAR?

Using the LiDAR sensor, ARKit scene reconstruction scans the environment to create mesh geometry representing the real world environment. Additionally, ARKit provides an optional classification of each triangle in the scanned mesh.
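For example, a minimal sketch of turning that on (assuming an ARSCNView named sceneView on a LiDAR-equipped device):

import ARKit

func startSceneReconstruction(on sceneView: ARSCNView) {
    let config = ARWorldTrackingConfiguration()
    // Scene reconstruction is only supported on devices with a LiDAR scanner.
    if ARWorldTrackingConfiguration.supportsSceneReconstruction(.meshWithClassification) {
        config.sceneReconstruction = .meshWithClassification   // mesh + per-triangle classification
    }
    sceneView.session.run(config)
}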


1 Answer

Although it's not explicitly stated, I'm certain that ARKit is correcting for non-linear lens distortion. Lens distortion (and inverse distortion) lookup tables exist in iOS 11 and are available via AVCameraCalibrationData, but they are not exposed by ARKit, presumably because there is no need for them since you're already working with rectified coordinates.
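If you do need those tables, they can be requested from AVFoundation rather than ARKit. A rough sketch (assuming a configured capture session whose AVCapturePhotoOutput supports calibration-data delivery; the class name is illustrative):

import AVFoundation

final class CalibrationCapture: NSObject, AVCapturePhotoCaptureDelegate {
    func capture(with output: AVCapturePhotoOutput) {
        let settings = AVCapturePhotoSettings()
        // Calibration-data delivery must be supported (virtual multi-camera devices) and enabled.
        if output.isCameraCalibrationDataDeliverySupported {
            settings.isCameraCalibrationDataDeliveryEnabled = true
        }
        output.capturePhoto(with: settings, delegate: self)
    }

    func photoOutput(_ output: AVCapturePhotoOutput,
                     didFinishProcessingPhoto photo: AVCapturePhoto,
                     error: Error?) {
        guard let calibration = photo.cameraCalibrationData else { return }
        print(calibration.intrinsicMatrix)                   // 3x3 pinhole intrinsics
        print(calibration.lensDistortionCenter)              // distortion center in pixels
        print(calibration.lensDistortionLookupTable as Any)  // radial distortion curve
    }
}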

Whether or not the distortion model parameters are the same for every unit of a given device model (i.e. exactly the same values for each iPhone 7) is an interesting question. I don't have access to multiple phones of the same model, but this shouldn't be hard to figure out for someone who does.

source

As an example, consider the QR marker detection from https://github.com/verebes1/ARKit-Multiplayer:

With the help of Apple's Vision framework it's now possible to recognize a QR marker in the camera's video feed and track it while it is in the field of view. The framework provides us with the coordinates of the QR marker's square corners in the screen's coordinate system.
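A hedged sketch of that Vision step (assuming you feed it the current ARFrame's capturedImage pixel buffer; the function name is made up):

import Vision

// Detect QR codes in a pixel buffer and return their corner points in Vision's
// normalized image coordinates (origin at the bottom-left, values in 0...1).
func detectQRCorners(in pixelBuffer: CVPixelBuffer,
                     completion: @escaping ([[CGPoint]]) -> Void) {
    let request = VNDetectBarcodesRequest { request, _ in
        let observations = request.results as? [VNBarcodeObservation] ?? []
        let corners = observations
            .filter { $0.symbology == .qr }   // symbology name as in recent SDKs
            .map { [$0.topLeft, $0.topRight, $0.bottomRight, $0.bottomLeft] }
        completion(corners)
    }
    let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:])
    try? handler.perform([request])
}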

QR marker pose estimation

The next thing you probably want to do after detecting the QR markers is to obtain the camera pose from them.

To perform QR marker pose estimation you need to know the calibration parameters of your camera: the camera matrix and the distortion coefficients. Each camera lens has unique parameters, such as focal length, principal point, and lens distortion model. The process of finding the intrinsic camera parameters is called camera calibration. The camera calibration process is important for Augmented Reality applications because it describes the perspective transformation and lens distortion of an output image. To achieve the best user experience with Augmented Reality, visualization of an augmented object should be done using the same perspective projection.

In the end, what you get after calibration is the camera matrix (a 3x3 matrix containing the focal distances and the camera center coordinates, a.k.a. the intrinsic parameters) and the distortion coefficients (a vector of 5 or more elements that models the distortion produced by your camera). The calibration parameters are pretty much the same for most iDevices.
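As a quick illustration, the 3x3 camera matrix has the usual pinhole layout; built with simd (column-major storage assumed here) it could look like:

import simd

// | fx  0  cx |
// |  0 fy  cy |
// |  0  0   1 |
func cameraMatrix(fx: Float, fy: Float, cx: Float, cy: Float) -> simd_float3x3 {
    simd_float3x3(columns: (
        SIMD3<Float>(fx, 0, 0),
        SIMD3<Float>(0, fy, 0),
        SIMD3<Float>(cx, cy, 1)
    ))
}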

With the precise locations of the marker corners, we can estimate a transformation between our camera and a marker in 3D space. This operation is known as pose estimation from 2D-3D correspondences. The pose estimation process finds a Euclidean transformation (consisting only of rotation and translation components) between the camera and the object.

(Figure: camera center C, 3D marker corners P1-P4, and their projections p1-p4 on the image plane)

C denotes the camera center. The P1-P4 points are 3D points in the world coordinate system and the p1-p4 points are their projections onto the camera's image plane. Our goal is to find the relative transformation between the known marker position in the 3D world (P1-P4) and the camera C, using the intrinsic matrix and the known point projections on the image plane (p1-p4).

OpenCV functions are used to calculate the QR marker transformation in such a way that it minimizes the reprojection error, that is, the sum of squared distances between the observed projections (imagePoints) and the projected objectPoints. The estimated transformation is defined by rotation (rvec) and translation (tvec) components. This is also known as a Euclidean or rigid transformation. In the end we get a rotation quaternion and a translation vector for the QR marker.

Integration into Apple's ARKit

The final part is the integration of all the information about the QR marker's pose into the 3D scene created by ARKit. ARKit uses Visual Inertial Odometry (VIO) to accurately track the world around it. VIO fuses camera sensor data with CoreMotion data. These two inputs allow the device to sense how it moves within a room with a high degree of accuracy, and without any additional calibration. All the rendering is based on Apple's Metal, with Apple's SceneKit on top of it.

In order to render a SceneKit node on our QR marker properly, we need to create a model matrix for the QR marker from the quaternion and translation vector we got from OpenCV. The next step is to multiply the QR marker's model matrix by the SceneKit scene's virtual camera transform matrix. As a result, we can see a custom node (the Axes node in our project) that follows all the QR marker's movements in the real world while it is in the field of view of the iPhone's camera; when it isn't, it stays at the last updated position so we can examine it from different angles.
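A minimal sketch of that last step (assuming rvec and tvec have already been bridged over from OpenCV as plain simd vectors, and assuming an ARSCNView named sceneView with an axesNode already added to its scene; depending on your OpenCV/SceneKit axis conventions an extra axis flip may be required):

import ARKit
import SceneKit
import simd

// Convert an OpenCV Rodrigues rotation vector + translation into a 4x4 model matrix.
func markerModelMatrix(rvec: SIMD3<Float>, tvec: SIMD3<Float>) -> simd_float4x4 {
    let angle = simd_length(rvec)
    let axis = angle > 0 ? simd_normalize(rvec) : SIMD3<Float>(0, 0, 1)
    var matrix = simd_float4x4(simd_quatf(angle: angle, axis: axis))  // rotation part
    matrix.columns.3 = SIMD4<Float>(tvec, 1)                          // translation part
    return matrix
}

// The marker pose is given in camera space, so multiply by the camera's world transform.
func placeAxesNode(_ axesNode: SCNNode, in sceneView: ARSCNView,
                   rvec: SIMD3<Float>, tvec: SIMD3<Float>) {
    guard let cameraTransform = sceneView.pointOfView?.simdTransform else { return }
    axesNode.simdTransform = cameraTransform * markerModelMatrix(rvec: rvec, tvec: tvec)
}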


Answered Oct 22 '22 by AD Progress