The Project Tango C API documentation says that the TANGO_CALIBRATION_POLYNOMIAL_3_PARAMETERS
lens distortion is modeled as:
x_corr_px = x_px (1 + k1 * r2 + k2 * r4 + k3 * r6)
y_corr_px = y_px (1 + k1 * r2 + k2 * r4 + k3 * r6)
That is, the undistorted coordinates are a power series function of the distorted coordinates. There is another definition in the Java API, but that description isn't detailed enough to tell which direction the function maps.
I've had a lot of trouble getting things to register properly, and I suspect that the mapping may actually go in the opposite direction, i.e. that the distorted coordinates are a power series of the undistorted coordinates. If the camera calibration was produced using OpenCV, the cause of the problem may be that the OpenCV documentation contradicts itself. The easiest description to find and understand, the OpenCV camera calibration tutorial, does agree with the Project Tango docs.
But on the other hand, the OpenCV API documentation specifies that the mapping goes the other way: the distorted coordinates are a polynomial function of the undistorted ones.
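To make the ambiguity concrete, here is a small Python helper (my own notation, not taken from either document) that applies the 3-parameter polynomial; the only question is which coordinates go in and which come out:
def apply_poly3(x, y, k1, k2, k3):
    # Scale a point radially by the 3-parameter polynomial.
    r2 = x * x + y * y
    factor = 1 + k1 * r2 + k2 * r2**2 + k3 * r2**3
    return x * factor, y * factor

# Reading A (Tango docs / OpenCV tutorial): input is distorted, output is undistorted.
# Reading B (OpenCV API docs): input is undistorted, output is distorted.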
My experiments with OpenCV show that its API documentation appears correct and the tutorial is wrong. A positive k1 (with all other distortion parameters set to zero) means pincushion distortion, and a negative k1 means barrel distortion. This matches what Wikipedia says about the Brown-Conrady model and is the opposite of the Tsai model. Note that distortion can be modeled either way, depending on what makes the math more convenient. I opened a bug against OpenCV for this mismatch.
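For illustration, here is a quick sketch of that check with made-up intrinsics (not the Tango calibration): projecting one off-axis point with only k1 set, OpenCV's forward model pushes the point away from the image center when k1 is positive (pincushion) and pulls it inward when k1 is negative (barrel):
import numpy as np
import cv2

K = np.array([[1000.0, 0, 640], [0, 1000.0, 360], [0, 0, 1]])   # made-up intrinsics
rvec = np.zeros((3, 1))
tvec = np.zeros((3, 1))
pt = np.array([[0.4, 0.3, 1.0]])                                 # off-axis 3D point, 1 m in front of the camera

ideal, _ = cv2.projectPoints(pt, rvec, tvec, K, None)                              # no distortion
pos_k1, _ = cv2.projectPoints(pt, rvec, tvec, K, np.array([0.2, 0, 0, 0, 0.0]))    # k1 > 0
neg_k1, _ = cv2.projectPoints(pt, rvec, tvec, K, np.array([-0.2, 0, 0, 0, 0.0]))   # k1 < 0

c = np.array([640.0, 360.0])
print(np.linalg.norm(ideal[0, 0] - c),    # ideal radius from the image center
      np.linalg.norm(pos_k1[0, 0] - c),   # larger radius -> pincushion
      np.linalg.norm(neg_k1[0, 0] - c))   # smaller radius -> barrel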
So my question: Is the Project Tango lens distortion model the same as the one implemented in OpenCV (documentation notwithstanding)?
Here's an image I captured from the color camera (slight pincushioning is visible):
And here's the camera calibration reported by the Tango service:
distortion = {double[5]@3402}
[0] = 0.23019999265670776
[1] = -0.6723999977111816
[2] = 0.6520439982414246
[3] = 0.0
[4] = 0.0
calibrationType = 3
cx = 638.603
cy = 354.906
fx = 1043.08
fy = 1043.1
cameraId = 0
height = 720
width = 1280
Here's how to undistort with OpenCV in Python:
>>> import cv2
>>> import numpy
>>> src = cv2.imread('tango00042.png')
>>> # OpenCV expects distortion coefficients in the order (k1, k2, p1, p2, k3),
>>> # so Tango's third polynomial coefficient goes in the last slot:
>>> d = numpy.array([0.2302, -0.6724, 0, 0, 0.652044])
>>> m = numpy.array([[1043.08, 0, 638.603], [0, 1043.1, 354.906], [0, 0, 1]])
>>> h, w = src.shape[:2]
>>> mDst, roi = cv2.getOptimalNewCameraMatrix(m, d, (w, h), 1, (w, h))
>>> dst = cv2.undistort(src, m, d, None, mDst)
>>> cv2.imwrite('foo.png', dst)
And that produces this, which is maybe a bit overcorrected at the top edge but much better than my attempts with the reverse model:
The Tango C-API docs state that (x_corr_px, y_corr_px) is the "corrected output position". This corrected output position then needs to be scaled by the focal length and offset by the center of projection to correspond to distorted pixel coordinates.
So, to project a point onto an image, you would have to (a short Python sketch of these steps follows the list):
1) Compute r2 from the normalized image coordinates (x, y):
   r2 = x*x + y*y
2) Compute (x_corr_px, y_corr_px) based on the mentioned equations:
   x_corr_px = x (1 + k1 * r2 + k2 * r4 + k3 * r6)
   y_corr_px = y (1 + k1 * r2 + k2 * r4 + k3 * r6)
3) Compute the distorted pixel coordinates:
   x_dist_px = x_corr_px * fx + cx
   y_dist_px = y_corr_px * fy + cy
4) Draw (x_dist_px, y_dist_px) on the original, distorted image buffer.
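Here is a minimal Python sketch of those steps, using the calibration values reported above; the function name is mine, and the input is assumed to already be in normalized camera coordinates (X/Z, Y/Z):
# Intrinsics and Poly3 coefficients reported by the Tango service above.
fx, fy, cx, cy = 1043.08, 1043.1, 638.603, 354.906
k1, k2, k3 = 0.2302, -0.6724, 0.652044

def project_to_distorted_pixel(x, y):
    # (x, y) are normalized camera coordinates, i.e. (X/Z, Y/Z).
    r2 = x * x + y * y
    r4 = r2 * r2
    r6 = r4 * r2
    factor = 1 + k1 * r2 + k2 * r4 + k3 * r6
    x_corr, y_corr = x * factor, y * factor        # "corrected output position"
    return x_corr * fx + cx, y_corr * fy + cy      # distorted pixel coordinates

print(project_to_distorted_pixel(0.0, 0.0))   # the optical axis maps to (cx, cy)
print(project_to_distorted_pixel(0.3, 0.2))   # an off-axis point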
This also means that the corrected coordinates are the normalized coordinates scaled by a power series of the normalized image coordinates' magnitude (which is the opposite of the reading suggested in the question).
Looking at the implementation of cvProjectPoints2 in OpenCV (see [opencv]/modules/calib3d/src/calibration.cpp), the "Poly3" distortion in OpenCV is applied in the same direction as in Tango. All three versions (Tango docs, OpenCV tutorial, OpenCV API) are consistent and correct.
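As a quick consistency check (my own test point, not from either documentation), projecting the same point manually with the Tango equations and with cv2.projectPoints using the Tango coefficients should give the same pixel:
import numpy as np
import cv2

fx, fy, cx, cy = 1043.08, 1043.1, 638.603, 354.906
k1, k2, k3 = 0.2302, -0.6724, 0.652044
K = np.array([[fx, 0, cx], [0, fy, cy], [0, 0, 1]])
d = np.array([k1, k2, 0, 0, k3])                     # OpenCV order: k1, k2, p1, p2, k3

pt = np.array([[0.2, -0.1, 1.0]])                    # arbitrary 3D point in the camera frame
uv_opencv, _ = cv2.projectPoints(pt, np.zeros((3, 1)), np.zeros((3, 1)), K, d)

x, y = 0.2 / 1.0, -0.1 / 1.0                         # normalized coordinates
r2 = x * x + y * y
factor = 1 + k1 * r2 + k2 * r2**2 + k3 * r2**3
uv_manual = (x * factor * fx + cx, y * factor * fy + cy)

print(uv_opencv[0, 0], uv_manual)                    # the two should agree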
Good luck, and hopefully this helps!
(Update: Taking a closer look at the code, it looks like the corrected coordinates and the distorted coordinates are not the same. I've removed the incorrect parts of my response; the remaining parts of this answer are still correct.)