I have a set of images of a rigid body with some attached markers. I defined a coordinate system with its origin at one of these markers, and I want to get the rotation and translation between this coordinate system and the one defined at the camera's origin.
I tried POSIT for some time (following this) without ever getting acceptable results, until I realized that I had to calibrate the camera first. Based on this and using some images acquired with a calibration body, I got the camera intrinsic matrix and the distortion parameters. I also got a (bad?) re-projection error of 1.276.
Thanks.
It seems that the only way to get a decrease in the reprojection error (it changed from ~1.3 to ~0.7) is by setting the following parameters to 0 in the XML configuration file:
<Calibrate_FixAspectRatio> 0 </Calibrate_FixAspectRatio>
<Calibrate_AssumeZeroTangentialDistortion> 0 </Calibrate_AssumeZeroTangentialDistortion>
<Calibrate_FixPrincipalPointAtTheCenter> 0 </Calibrate_FixPrincipalPointAtTheCenter>
Using more images doesn't change the error, and I'm still not sure whether this new error is acceptable.
I used the values that the calibration gave as output (namely the focal length and the optical centre) in POSIT, but the results were very similar to the ones I got with the pre-calibration values. I didn't use the distortion parameters because I don't know how to handle them in POSIT (could they make a difference in the results?).
The camera matrix I got after the calibration:
<Camera_Matrix type_id="opencv-matrix">
<rows>3</rows>
<cols>3</cols>
<dt>d</dt>
<data>
2.0613885351075501e+003 0. 2.2865517805186334e+002
0. 2.2546461307897816e+003 2.5261706201677623e+002
0. 0. 1.
</data>
</Camera_Matrix>
and the way I used it in POSIT:
#define FOCAL_LENGTH   2158.0175  // mean of fx and fy below (POSIT takes a single focal length)
#define FOCAL_LENGTH_X 2061.389
#define FOCAL_LENGTH_Y 2254.646
#define cX 203.655                // principal point, after the shift described below
#define cY 205.117
I calculated the difference between the new centre coordinates and those of the calibration images, and then applied that offset to the centre coordinates of the images I'm working with in POSIT.
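For reference, this is roughly how those values would be wired into the legacy POSIT C API. This is only a sketch under my assumptions: the marker arrays and the point count of 4 are hypothetical placeholders, and POSIT expects image points expressed in pixels relative to the principal point, which is what the cX/cY subtraction below does.

#include <opencv/cv.h>   // legacy C API where cvPOSIT is declared

void runPosit(CvPoint3D32f modelPoints[4], CvPoint2D32f imagePixels[4])
{
    // Shift the detected marker pixels so the principal point becomes the origin.
    CvPoint2D32f centered[4];
    for (int i = 0; i < 4; ++i) {
        centered[i].x = imagePixels[i].x - cX;
        centered[i].y = imagePixels[i].y - cY;
    }

    CvPOSITObject* posit = cvCreatePOSITObject(modelPoints, 4);
    float rotation[9];       // 3x3 rotation matrix, row-major
    float translation[3];
    CvTermCriteria crit = cvTermCriteria(CV_TERMCRIT_EPS | CV_TERMCRIT_ITER, 100, 1.0e-5);

    // POSIT only accepts a single focal length, hence the averaged value above.
    cvPOSIT(posit, centered, FOCAL_LENGTH, crit, rotation, translation);
    cvReleasePOSITObject(&posit);
}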
I'm using POSIT on two different images where, in theory, I should get a rotation of 0 degrees (first image) and 10 degrees (second image) between the model and the camera coordinate systems. After getting the rotation matrix for each image, I defined a unit vector in the camera coordinate system and mapped it into the model coordinate system by multiplying it by the inverse of each of the two rotation matrices given by POSIT, obtaining two new vectors in the model coordinate system. When I calculate the angle between these two vectors, the output isn't the 10 degrees it should be.
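Side note: an easier way to check the expected 10 degrees, without picking a test vector, is to measure the angle of the relative rotation directly, using trace(R) = 1 + 2 cos(theta). A minimal sketch, assuming the two POSIT rotation matrices have already been copied into cv::Mat objects R1 and R2:

#include <opencv2/core/core.hpp>
#include <algorithm>
#include <cmath>

double rotationAngleDeg(const cv::Mat& R1, const cv::Mat& R2)
{
    cv::Mat Rrel = R2 * R1.t();                    // rotation taking pose 1 into pose 2
    double c = (cv::trace(Rrel)[0] - 1.0) / 2.0;   // trace(R) = 1 + 2*cos(theta)
    c = std::max(-1.0, std::min(1.0, c));          // clamp against numerical round-off
    return std::acos(c) * 180.0 / CV_PI;
}

If this angle isn't ~10 degrees either, the problem is in the poses themselves rather than in the vector comparison.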
Does anyone have an idea where I'm going wrong?
Lens distortion is one of the key factors affecting measurement accuracy. The goal of distortion calibration is to find the transformation that maps the actual camera image plane onto an image that follows the perspective camera model. Many lens distortion models exist.
We can use the function cv::calibrateCamera(), which returns the camera matrix, distortion coefficients, rotation and translation vectors, etc.
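For example, a minimal C++ sketch of such a call, assuming the chessboard corners have already been detected and collected into objectPoints/imagePoints (both names are placeholders); the return value is the RMS reprojection error discussed above:

#include <opencv2/core/core.hpp>
#include <opencv2/calib3d/calib3d.hpp>
#include <vector>

double calibrate(const std::vector<std::vector<cv::Point3f> >& objectPoints,
                 const std::vector<std::vector<cv::Point2f> >& imagePoints,
                 cv::Size imageSize,
                 cv::Mat& cameraMatrix, cv::Mat& distCoeffs)
{
    std::vector<cv::Mat> rvecs, tvecs;   // one estimated pose per calibration view
    // flags = 0 leaves the aspect ratio, tangential distortion and principal
    // point free, matching the three XML settings discussed earlier.
    return cv::calibrateCamera(objectPoints, imagePoints, imageSize,
                               cameraMatrix, distCoeffs, rvecs, tvecs, 0);
}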
OpenCV doesn't provide a distort function for images, but you can implement one yourself. All you need are the intrinsic parameters (camera matrix and distortion coefficients) and the size of the distorted image, denoted here as cam_mtx, dis_cef, and image_size.
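A sketch of one possible per-point implementation, using the radial/tangential (Brown) model that OpenCV's coefficients k1, k2, p1, p2, k3 describe; distorting a whole image would apply this mapping per pixel over image_size (e.g. via a remap):

#include <opencv2/core/core.hpp>

cv::Point2f distortPoint(const cv::Point2f& p,
                         const cv::Mat& cam_mtx,   // 3x3, type CV_64F
                         const cv::Mat& dis_cef)   // 1x5: k1 k2 p1 p2 k3
{
    double fx = cam_mtx.at<double>(0, 0), fy = cam_mtx.at<double>(1, 1);
    double cx = cam_mtx.at<double>(0, 2), cy = cam_mtx.at<double>(1, 2);
    double k1 = dis_cef.at<double>(0), k2 = dis_cef.at<double>(1);
    double p1 = dis_cef.at<double>(2), p2 = dis_cef.at<double>(3);
    double k3 = dis_cef.at<double>(4);

    // Normalize the ideal pixel onto the undistorted image plane.
    double x = (p.x - cx) / fx, y = (p.y - cy) / fy;
    double r2 = x * x + y * y;
    double radial = 1.0 + k1 * r2 + k2 * r2 * r2 + k3 * r2 * r2 * r2;

    // Apply radial plus tangential distortion.
    double xd = x * radial + 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x);
    double yd = y * radial + p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y;

    // Map back to pixel coordinates.
    return cv::Point2f((float)(fx * xd + cx), (float)(fy * yd + cy));
}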
What is the image resolution? The principal point should be close to the image centre. What is the focal length of the lens? That calibration error could be fine depending on the precision you need. To avoid the error introduced by the lens distortion, you should undistort the image points passed to POSIT using undistortPoints. How big this error is depends on how strong your lens distortion is.
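A minimal sketch of that step, assuming the detected marker points are in a vector named raw (a placeholder). Note that passing the camera matrix as the P argument keeps the output in pixel coordinates; without it, undistortPoints returns normalized coordinates instead:

#include <opencv2/opencv.hpp>
#include <vector>

std::vector<cv::Point2f> undistortMarkers(const std::vector<cv::Point2f>& raw,
                                          const cv::Mat& cameraMatrix,
                                          const cv::Mat& distCoeffs)
{
    std::vector<cv::Point2f> undistorted;
    // With P = cameraMatrix the result stays in pixels, ready to be
    // re-centred on the principal point and passed to POSIT.
    cv::undistortPoints(raw, undistorted, cameraMatrix, distCoeffs,
                        cv::noArray(), cameraMatrix);
    return undistorted;
}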