I'm receiving depth images from a ToF camera via MATLAB. The drivers delivered with the ToF camera compute the x, y, z coordinates from the depth image using OpenCV functions, which are called from MATLAB via MEX files.

Later on, however, I can't use those drivers anymore, nor any OpenCV functions, so I need to implement the 2D-to-3D mapping on my own, including the compensation of radial distortion. I already got hold of the camera parameters, and the computation of the x, y, z coordinates for each pixel of the depth image is working. Until now I have been solving the implicit equations of the undistortion with Newton's method (which isn't really fast...), but I want to implement the undistortion the way the OpenCV function does it.

... and there is my problem: I don't really understand it, and I hope you can help me out there. How does it actually work? I tried to search through the forum, but haven't found any useful threads concerning this case.

Greetings!
The undistort function takes a distorted image, the camera matrix, and the distortion coefficients, and returns an undistorted image, often called the destination image.

Camera calibration is the estimation of the parameters of a camera. These parameters are required to determine an accurate relationship between a 3D point in the real world and its corresponding 2D projection (pixel) in the image captured by that calibrated camera.
k_1, k_2, k_3, k_4 are the distortion coefficients, (u_d, v_d) are the coordinates of a point in the distorted image, and (u, v) are the coordinates of the corresponding point in the corrected image. D(θ) defines the mapping from the angle of the incoming ray to the pixel distance from the image center. (This four-coefficient, angle-based parameterization is the one used by OpenCV's fisheye model; the standard model discussed below uses k_1..k_6, p_1 and p_2 instead.)
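For reference, a minimal sketch of that angle-to-radius mapping, as given in the OpenCV fisheye calibration documentation:

\theta_d = \theta \, (1 + k_1 \theta^2 + k_2 \theta^4 + k_3 \theta^6 + k_4 \theta^8)

where θ is the angle between the incoming ray and the optical axis; the distorted normalized coordinates are then obtained by scaling the undistorted ones by θ_d / r, with r the undistorted radial distance.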
The equations of the projection of a 3D point [X; Y; Z] to a 2D image point [u; v] are provided on the OpenCV documentation page related to camera calibration (source: opencv.org).
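Since the equation image is not reproduced here, this is the model written out as in the OpenCV documentation (the rational model with radial coefficients k_1..k_6 and tangential coefficients p_1, p_2; Z > 0 assumed):

x' = X / Z, \quad y' = Y / Z, \quad r^2 = x'^2 + y'^2

x'' = x' \frac{1 + k_1 r^2 + k_2 r^4 + k_3 r^6}{1 + k_4 r^2 + k_5 r^4 + k_6 r^6} + 2 p_1 x' y' + p_2 (r^2 + 2 x'^2)

y'' = y' \frac{1 + k_1 r^2 + k_2 r^4 + k_3 r^6}{1 + k_4 r^2 + k_5 r^4 + k_6 r^6} + p_1 (r^2 + 2 y'^2) + 2 p_2 x' y'

u = f_x x'' + c_x, \quad v = f_y y'' + c_y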
In the case of lens distortion, the equations are non-linear and depend on 3 to 8 parameters (k1 to k6, p1 and p2). Hence, it would normally require a non-linear solving algorithm (e.g. Newton's method, the Levenberg-Marquardt algorithm, etc.) to invert such a model and estimate the undistorted coordinates from the distorted ones. This is what is used behind the function undistortPoints, with tuned parameters making the optimization fast but slightly inaccurate.
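To make this concrete, here is a minimal MATLAB sketch of that kind of fixed-point iteration for a single point, assuming the plain radial/tangential model (k1, k2, k3, p1, p2) and intrinsics fx, fy, cx, cy from your calibration; the function and variable names are mine, this is not the actual OpenCV source:

function [x, y] = undistort_point(u_d, v_d, fx, fy, cx, cy, k1, k2, k3, p1, p2)
% Estimate undistorted normalized coordinates (x, y) from distorted
% pixel coordinates (u_d, v_d) by fixed-point iteration.
xd = (u_d - cx) / fx;              % distorted normalized coordinates
yd = (v_d - cy) / fy;
x = xd;                            % initial guess: the distorted point itself
y = yd;
for iter = 1:5                     % a handful of iterations usually suffices
    r2 = x^2 + y^2;
    radial = 1 + k1*r2 + k2*r2^2 + k3*r2^3;    % radial distortion factor
    dx = 2*p1*x*y + p2*(r2 + 2*x^2);           % tangential distortion
    dy = p1*(r2 + 2*y^2) + 2*p2*x*y;
    x = (xd - dx) / radial;                    % fixed-point update
    y = (yd - dy) / radial;
end
end

The undistorted normalized coordinates directly give you the 3D point for a known depth: X = x * Z and Y = y * Z.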
However, in the particular case of image lens correction (as opposed to point correction), there is a much more efficient approach based on a well-known image re-sampling trick. The trick is that, in order to obtain a valid intensity for each pixel of your destination image, you have to transform coordinates in the destination image into coordinates in the source image, and not the opposite as one would intuitively expect. In the case of lens distortion correction, this means that you actually do not have to invert the non-linear model, but just apply it.
Basically, the algorithm behind the function undistort is the following. For each pixel of the destination lens-corrected image:

1. Transform the destination pixel coordinates (u_dst, v_dst) to normalized coordinates (x', y') using the inverse of the calibration matrix K,
2. Apply the lens distortion model displayed above to (x', y') to obtain the distorted normalized coordinates (x'', y''),
3. Transform (x'', y'') to distorted pixel coordinates (u_src, v_src) using the calibration matrix K,
4. Use the interpolation method of your choice to find the intensity/depth at (u_src, v_src) in the source image, and assign this intensity/depth to the current destination pixel.

Note that if you are interested in undistorting the depth map image, you should use nearest-neighbor interpolation, otherwise you will almost certainly interpolate depth values at object boundaries, resulting in unwanted artifacts. A MATLAB sketch of this whole procedure is given below.
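Here is a minimal MATLAB sketch of that loop for a depth map, again assuming the plain radial/tangential model, 1-based MATLAB pixel coordinates, and a calibration matrix K = [fx 0 cx; 0 fy cy; 0 0 1]; the function and parameter names are mine:

function depth_undist = undistort_depthmap(depth, fx, fy, cx, cy, k1, k2, k3, p1, p2)
% Lens-correct a depth image by sampling the *distorted* source image
% at the location each corrected pixel maps to (inverse warping).
[h, w] = size(depth);
depth_undist = zeros(h, w);
for v_dst = 1:h
    for u_dst = 1:w
        % 1. destination pixel -> normalized coordinates (inverse of K)
        x = (u_dst - cx) / fx;
        y = (v_dst - cy) / fy;
        % 2. apply the (forward) distortion model
        r2 = x^2 + y^2;
        radial = 1 + k1*r2 + k2*r2^2 + k3*r2^3;
        xd = x*radial + 2*p1*x*y + p2*(r2 + 2*x^2);
        yd = y*radial + p1*(r2 + 2*y^2) + 2*p2*x*y;
        % 3. normalized distorted coordinates -> source pixel (apply K)
        u_src = fx*xd + cx;
        v_src = fy*yd + cy;
        % 4. nearest-neighbor lookup, so no depths are mixed at edges
        ui = round(u_src);
        vi = round(v_src);
        if ui >= 1 && ui <= w && vi >= 1 && vi <= h
            depth_undist(v_dst, u_dst) = depth(vi, ui);
        end
    end
end
end

In practice you would compute u_src and v_src only once, store them as lookup maps, and reuse them for every frame; this is exactly what the OpenCV pair initUndistortRectifyMap/remap does.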