I want to use the OpenCV camera calibration function (calibrateCamera()) to calibrate my camera.
According to the OpenCV documentation, I have to take at least 10 shots of my chessboard.
But when I want to calculate the objectPoints (here, the locations of the chessboard's inner corners), I am confused: if the origin is the camera and the chessboard is moving, the theory behind the concept is easy for me to understand, but the objectPoints are hard to calculate.
The second way is to fix the chessboard and move the camera. But with this approach, I don't understand how to account for the camera's distance from the chessboard when calculating the objectPoints, or how else to tell the OpenCV camera calibration function about these changes in distance and direction.
I would appreciate any help in solving my problem.
The second approach you mentioned is the most popular because it is very easy to use.
Let's say you have the following (9,6) chessboard, where the side of each square has length a:
[chessboard calibration pattern image - source: opencv.org]
Then, you simply define your object points as follows:
// 3D coordinates of chessboard points
std::vector<cv::Point3f> objectPoints;
for(int y=0; y<6; ++y) {
for(int x=0; x<9; ++x)
objectPoints.push_back(cv::Point3f(x*a,y*a,0));
}
// One vector of chessboard points for each chessboard image
std::vector<std::vector<cv::Point3f>> arrayObjectPoints;
for(int n=0; n<number_images; ++n)
arrayObjectPoints.push_back(objectPoints);
Basically, since you may choose the 3D coordinate system as you wish, you can choose the chessboard's own coordinate system, which makes the object points very easy to define. The calibrateCamera function will then take care of estimating one R,t (the orientation and translation of the chessboard relative to the camera) for each image, plus one intrinsic matrix K and a set of distortion coefficients D common to all images.
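For completeness, here is a minimal sketch of what the calibration call might look like, assuming you have already filled a vector arrayImagePoints with the detected 2D corners (one vector per image; this name and imageSize are illustrative, not from the original answer):
// Sketch of the calibration call. Assumes arrayImagePoints is a
// std::vector<std::vector<cv::Point2f>> with one entry per image,
// ordered the same way as arrayObjectPoints, and imageSize is the
// size of the calibration images.
cv::Mat cameraMatrix, distCoeffs;   // K and D, shared by all images
std::vector<cv::Mat> rvecs, tvecs;  // one R,t per image
double rms = cv::calibrateCamera(arrayObjectPoints, arrayImagePoints,
                                 imageSize, cameraMatrix, distCoeffs,
                                 rvecs, tvecs);
// rms is the RMS reprojection error in pixels; a low value (well under
// one pixel) usually indicates a reasonable calibration.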
Also, take care to use the same ordering for the 2D points: the corners detected in each image must be listed in the same order as the corresponding entries in objectPoints.
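As an illustration (variable names are hypothetical, and the image is assumed to be already loaded as a grayscale cv::Mat), cv::findChessboardCorners returns the inner corners row by row, left to right, which matches the x/y loop order used for objectPoints above:
// Detect the 2D corners for one grayscale calibration image.
std::vector<std::vector<cv::Point2f>> arrayImagePoints;
std::vector<cv::Point2f> corners;
bool found = cv::findChessboardCorners(grayImage, cv::Size(9,6), corners);
if(found) {
    // Optionally refine the corner locations to sub-pixel accuracy
    cv::cornerSubPix(grayImage, corners, cv::Size(11,11), cv::Size(-1,-1),
        cv::TermCriteria(cv::TermCriteria::EPS + cv::TermCriteria::COUNT, 30, 0.001));
    arrayImagePoints.push_back(corners);
}
Only images where the full pattern is found should contribute an entry to both arrayImagePoints and arrayObjectPoints, so the two arrays stay the same length.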