I am trying to retrieve the translation and rotation vectors from a computed fundamental matrix. I use OpenCV, and the general approach is from Wikipedia. My code is like this:
//Compute Essential Matrix
Mat A = cameraMatrix(); //Computed using chessboard
Mat F = fundamentalMatrix(); //Computed using matching keypoints
Mat E = A.t() * F * A;
//Perform SVD on E
SVD decomp = SVD(E);
//U
Mat U = decomp.u;
//S
Mat S(3, 3, CV_64F, Scalar(0));
S.at<double>(0, 0) = decomp.w.at<double>(0, 0); //decomp.w is a 3x1 column of singular values
S.at<double>(1, 1) = decomp.w.at<double>(1, 0);
S.at<double>(2, 2) = decomp.w.at<double>(2, 0);
//V
Mat V = decomp.vt; //Needs to be decomp.vt.t(); (transpose once more)
//W
Mat W(3, 3, CV_64F, Scalar(0));
W.at<double>(0, 1) = -1;
W.at<double>(1, 0) = 1;
W.at<double>(2, 2) = 1;
cout << "computed rotation: " << endl;
cout << U * W.t() * V.t() << endl;
cout << "real rotation:" << endl;
Mat rot;
Rodrigues(images[1].rvec - images[0].rvec, rot); //Difference between known rotations
cout << rot << endl;
At the end I try to compare the estimated rotation to the one I computed using the chessboard, which is in every image (I plan to get the extrinsic parameters without the chessboard). For example, I get this:
computed rotation:
[0.8543027125286542, -0.382437675069228, 0.352006107978011;
0.3969758209413922, 0.9172325022900715, 0.03308676972148356;
0.3355250705298953, -0.1114717965690797, -0.9354127247453767]
real rotation:
[0.9998572365450219, 0.01122579241510944, 0.01262886032882241;
-0.0114034800333517, 0.9998357441946927, 0.01408706050863871;
-0.01246864754818991, -0.01422906234781374, 0.9998210172891051]
So clearly there seems to be a problem; I just can't figure out what it could be.
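As an aside, subtracting the two rvecs is only an approximation for small rotations; the exact relative rotation between the two chessboard poses can be obtained by composing the rotation matrices instead. A minimal sketch, assuming images[i].rvec is the Rodrigues vector of the chessboard pose in image i:
Mat Ra, Rb;
Rodrigues(images[0].rvec, Ra);   // chessboard rotation for image 0
Rodrigues(images[1].rvec, Rb);   // chessboard rotation for image 1
Mat relRot = Rb * Ra.t();        // relative rotation from camera 0 to camera 1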
EDIT: Here are the results I got with the untransposed vt (obviously from another scene):
computed rotation:
[0.8720599858028177, -0.1867080200550876, 0.4523842353671251;
0.141182538980452, 0.9810442195058469, 0.1327393312518831;
-0.4685924368239661, -0.05188790438313154, 0.8818893204535954]
real rotation:
[0.8670861432556456, -0.427294988334106, 0.2560871201732064;
0.4024551137989086, 0.9038194629873437, 0.1453969040329854;
-0.2935838918455123, -0.02300806966752995, 0.9556563855167906]
Here is my computed camera matrix; the error was pretty low (about 0.17...).
[1699.001342509651, 0, 834.2587265398068;
0, 1696.645251354618, 607.1292618175946;
0, 0, 1]
Here are the results I get when trying to reproject a cube. For camera 0 the cube is axis-aligned, and rotation and translation are (0, 0, 0): http://imageshack.us/a/img802/5292/bildschirmfoto20130110u.png
And here is the second image, with the epilines of the points in the first image: http://imageshack.us/a/img546/189/bildschirmfoto20130110uy.png
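For reference, a minimal sketch (my addition) of how epilines for the points of the first image can be computed with OpenCV, assuming pts1 is a vector<Point2f> of those points and F is the fundamental matrix from above:
vector<Vec3f> lines2;                            // each epiline as (a, b, c): a*x + b*y + c = 0
computeCorrespondEpilines(pts1, 1, F, lines2);   // 1 means the points come from the first image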
A rotation and a translation can be combined into a single 4-by-4 matrix as follows, where the r's in the upper-left 3-by-3 block form the rotation and p, q and r form the translation vector; this matrix represents a rotation followed by a translation:
[ r11  r12  r13  p ]
[ r21  r22  r23  q ]
[ r31  r32  r33  r ]
[  0    0    0   1 ]
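A minimal sketch (my addition) of building such a matrix with OpenCV, assuming R is a 3x3 rotation and t a 3x1 translation, both of type CV_64F:
Mat T = Mat::eye(4, 4, CV_64F);   // start from the 4x4 identity
R.copyTo(T(Rect(0, 0, 3, 3)));    // upper-left 3x3 block holds the rotation
t.copyTo(T(Rect(3, 0, 1, 3)));    // last column (rows 0-2) holds the translation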
Both the essential and fundamental matrices completely describe the geometric relationship between corresponding points of a stereo pair of cameras. The only difference between the two is that the former deals with calibrated cameras, while the latter deals with uncalibrated ones.
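A minimal sketch (my addition) of that relationship in code, assuming a reasonably recent OpenCV, matched pixel coordinates pts1/pts2 (vector<Point2f>) and the calibrated camera matrix K:
// F from uncalibrated point matches, E from F plus the calibration
Mat F = findFundamentalMat(pts1, pts2, FM_RANSAC, 3.0, 0.99);
Mat E = K.t() * F * K;   // same relation the question's code uses: E = A^T * F * A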
Rotations and translations do not commute, and neither do translations and scales. Scales and rotations commute only in the special case of uniform scaling (scaling by the same amount in all directions); in general the two operations do not commute.
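A tiny example of the first claim (my addition): a 90-degree rotation about Z and a translation along X, written as 4-by-4 homogeneous matrices, give different results depending on the order in which they are composed:
Mat Rz = (Mat_<double>(4, 4) <<
    0, -1, 0, 0,
    1,  0, 0, 0,
    0,  0, 1, 0,
    0,  0, 0, 1);
Mat Tx = (Mat_<double>(4, 4) <<
    1, 0, 0, 5,
    0, 1, 0, 0,
    0, 0, 1, 0,
    0, 0, 0, 1);
cout << Rz * Tx << endl; // translation column ends up as (0, 5, 0)
cout << Tx * Rz << endl; // translation column stays (5, 0, 0)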
The reason F is a matrix of rank 2 is that it maps points of a 2D plane (image 1) to the pencil of lines in image 2 that pass through the epipole of image 2.
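A minimal sketch (my addition) of checking this numerically: the smallest singular value of a valid F should be (close to) zero, and F is commonly forced to rank 2 by zeroing it:
SVD svdF(F);
Mat w = svdF.w.clone();
w.at<double>(2, 0) = 0;                         // drop the smallest singular value
Mat F_rank2 = svdF.u * Mat::diag(w) * svdF.vt;  // re-assembled rank-2 F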
Please take a look at this link:
http://isit.u-clermont1.fr/~ab/Classes/DIKU-3DCV2/Handouts/Lecture16.pdf
Refer to page 2. There are two possibilities for R: the first is U*W*V^T and the second is U*W^T*V^T. You used the second; try the first.
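A minimal sketch of both possibilities, reusing U, W and decomp.vt from the code in the question (the translation is only determined up to sign and scale, so there are four (R, t) combinations in total):
Mat R1 = U * W * decomp.vt;       // first possibility:  R = U * W * V^T
Mat R2 = U * W.t() * decomp.vt;   // second possibility: R = U * W^T * V^T
Mat t  = U.col(2);                // translation direction, up to sign
Newer OpenCV versions also provide decomposeEssentialMat and recoverPose, which return these candidates and select the physically valid one with a cheirality (points in front of both cameras) check.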