I'm working on an OpenCV project, and I'm on to calibration. I believe I've implemented the code correctly; however, I'm getting different values for the camera matrix, sometimes wildly varying. Across 6 runs, each showing the calibration pattern 10 times, I get (decimals truncated for clarity):
[573, 0, 386;
0, 573, 312;
0, 0, 1]
[642, 0, 404;
0, 644, 288;
0, 0, 1]
[664, 0, 395;
0, 665, 272;
0, 0, 1]
[629, 0, 403;
0, 630, 288;
0, 0, 1]
[484, 0, 377;
0, 486, 307;
0, 0, 1]
[644, 0, 393;
0, 643, 289;
0, 0, 1]
These values differ by unacceptable amounts. I need to know the parameters to a decent degree of accuracy. What is typically the cause of such large variations, and how can I evaluate the correctness of a given matrix? It seems to depend on the variety of distances and orientations I show the pattern from, but I can't make sense of the pattern.
#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>

using namespace cv;
using namespace std;

int main(int, char**)
{
    VideoCapture cap(1);
    if(!cap.isOpened())
        return -1;
    cap.set(CV_CAP_PROP_FRAME_WIDTH, 800);
    cap.set(CV_CAP_PROP_FRAME_HEIGHT, 600);

    Mat edges;
    Size size(9,17);              // asymmetric circle grid dimensions
    int counter = 10;             // number of views to collect
    vector<Point2f> corners;
    bool found;
    // fr::ChessGen::getBoard is a project-specific helper that generates
    // the 3D object points for the grid
    vector<Point3f> chess = fr::ChessGen::getBoard(size, 1, true);
    vector<vector<Point3f> > objectPoints;
    vector<vector<Point2f> > imagePoints;
    Mat camera = Mat::eye(3, 3, CV_64F);
    Mat distortion = Mat::zeros(8, 1, CV_64F);
    vector<Mat> rvecs;
    vector<Mat> tvecs;

    namedWindow("edges", 1);
    for(;;)
    {
        Mat frame;
        cap >> frame;
        cvtColor(frame, edges, CV_BGR2GRAY);
        found = findCirclesGrid(edges, size, corners, CALIB_CB_ASYMMETRIC_GRID);
        if(found) frame.convertTo(edges, -1, 0.2);  // dim the frame so the overlay stands out
        drawChessboardCorners(edges, size, corners, found);
        imshow("edges", edges);
        if(found){
            // accept this view only if a key is pressed within 200 ms
            if(waitKey(200) >= 0){
                objectPoints.push_back(chess);
                imagePoints.push_back(corners);
                if(--counter <= 0)
                    break;
            }
        }
        else waitKey(30);
    }

    calibrateCamera(objectPoints, imagePoints, Size(800,600),
                    camera, distortion, rvecs, tvecs, 0);
    if(found) imwrite("/home/ryan/snapshot.png", edges);
    cout << camera << endl;
    return 0;
}
It depends on the camera/lens and the accuracy you require, but you probably need more than 10 positions, and you need to cover a wider range of view angles.
I'm assuming from the 800x600 resolution that this is a webcam with a simple wide-angle lens and a lot of distortion. I would say you need 6-8 positions/rotations of the target at each of 3-4 different angles to the camera. You also need to make sure that the target and the camera are fixed and don't move while an image is taken. Also, assuming the camera has simple auto-gain, you should make sure the target is very well lit so the camera uses a fast shutter speed and low gain.
One issue with the technique used by OpenCV is that it needs to see all the corners/dots on the target for a frame to be identified and used in the solution, so it's quite hard to get points near the corners of the image. You should check how many images were actually used in the calibration; it may be that it only finds all the points on a few of the 10 images and bases the solution on that subset.
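A quick way to check that, and the overall quality of the fit, is the return value of calibrateCamera: it is the overall RMS reprojection error in pixels, which the code in the question currently discards. A minimal sketch, reusing the variable names from the question's code (a good calibration is typically well under 1 px):
double rms = calibrateCamera(objectPoints, imagePoints, Size(800,600),
                             camera, distortion, rvecs, tvecs, 0);
// Report how many views contributed and how well the model fits them.
cout << "views used: " << objectPoints.size()
     << ", RMS reprojection error: " << rms << " px" << endl;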
It is also important not to show the pattern only perpendicular to the camera, but to rotate and tilt it. To improve the quality of the results, you can also inspect the positions of the detected corners closely, remove the images where some corners were not detected accurately, and run the calibration again (see the sketch after this answer).
I don't know which camera you're using, but with a camera that suffers from heavy distortion, or whose images are not sharp enough, the corners can be hard to detect accurately. OpenCV calibration can also be done with a circle pattern, which gives better results in that case.
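To make the pruning step concrete, here is a minimal sketch of computing the per-view reprojection error with projectPoints; it assumes the variable names from the question's code and runs after calibrateCamera. Views whose error is well above the others are candidates for removal before re-running the calibration:
// Per-view RMS reprojection error; assumes the question's variables
// (objectPoints, imagePoints, rvecs, tvecs, camera, distortion).
for(size_t i = 0; i < objectPoints.size(); i++)
{
    vector<Point2f> projected;
    projectPoints(objectPoints[i], rvecs[i], tvecs[i], camera, distortion, projected);
    double err = norm(imagePoints[i], projected, NORM_L2);
    cout << "view " << i << ": "
         << sqrt(err * err / projected.size()) << " px" << endl;
}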