 

OpenCV Computing Camera Position && Rotation

For a project I need to compute the real-world position and orientation of a camera with respect to a known object. I have a set of photos, each showing a chessboard from a different point of view. Using calibrateCamera and solvePnP I am able to reproject points into 2D to get an AR effect. So my situation is as follows:

  • Intrinsic parameters are known

  • Distortion coefficients are known

  • Translation vector and rotation vector are known for each photo.

I simply cannot figure out how to compute the position of the camera. My guess was:

  • invert the translation vector (= t')

  • convert the rotation vector to degrees (it seems to be in radians) and invert it

  • use Rodrigues on the rotation vector

  • compute RotationMatrix * t'

But the results are somehow totally off... Basically I want to compute a ray for each pixel in world coordinates. If more information on my problem is needed, I'd be glad to answer quickly.

I don't get it... somehow the rays are still off. This is my code, btw:

Mat image1CamPos = tvecs[0].clone(); //From calibrateCamera
Mat rot = rvecs[0].clone();          //From calibrateCamera
Rodrigues(rot, rot);                 //Rotation vector -> 3x3 rotation matrix
rot = rot.t();                       //Transpose = inverse rotation

//Position of Camera
Mat pos = rot * image1CamPos;

//Ray normal (mk[i][0] is a known image point)
double x = (( (double)mk[i][0].x) / fx) - (cx / fx);
double y = (( (double)mk[i][0].y) / fy) - (cy / fy);
double z = 1;
double mag = sqrt(x*x + y*y + z*z);
x /= mag;
y /= mag;
z /= mag;

Mat unit(3, 1, CV_64F);
unit.at<double>(0, 0) = x;
unit.at<double>(1, 0) = y;
unit.at<double>(2, 0) = z;

//Rotation of Ray
Mat ray = rot * unit;

But when plotting this, the rays are off :/

Asked Dec 19 '12 by Teris


1 Answer

The translation t (a 3x1 vector) and rotation R (a 3x3 matrix) of an object with respect to the camera define the coordinate transformation from object space into camera space, which is given by:

v' = R * v  + t

The inverse of the rotation matrix is simply its transpose:

R^-1 = R^T

Knowing this, you can easily solve the transformation (first eq.) for v:

v = R^T * v' - R^T * t

This is the transformation from camera into object space, i.e., the position of the camera with respect to the object (rotation = R^T and translation = -R^T * t).
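As a minimal C++ sketch of this inversion (assuming rvec and tvec are one rotation/translation pair from calibrateCamera or solvePnP, stored as CV_64F; the function name is only for illustration):

#include <opencv2/core.hpp>
#include <opencv2/calib3d.hpp>

// rvec, tvec: rotation/translation of the object w.r.t. the camera,
// e.g. rvecs[0] and tvecs[0] from calibrateCamera.
cv::Mat cameraPositionInObjectSpace(const cv::Mat& rvec, const cv::Mat& tvec)
{
    cv::Mat R;
    cv::Rodrigues(rvec, R);   // 3x1 rotation vector -> 3x3 rotation matrix

    cv::Mat R_inv = R.t();    // inverse of a rotation matrix is its transpose
    return -R_inv * tvec;     // camera position in object space: -R^T * t
}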

You can simply get a 4x4 homogeneous transformation matrix from this:

T = ( R^T  -R^T * t )
    (  0       1    )

If you now have any point in camera coordinates, you can transform it into object coordinates:

p' = T * (x, y, z, 1)^T
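
For example, a sketch of assembling that matrix with OpenCV (assuming R and t are the 3x3 rotation matrix and 3x1 translation vector from above, both CV_64F; the function name is only for illustration):

#include <opencv2/core.hpp>

// Build T = ( R^T  -R^T*t ; 0 0 0 1 ), mapping camera coordinates to object coordinates.
cv::Mat camToObjectTransform(const cv::Mat& R, const cv::Mat& t)
{
    cv::Mat T = cv::Mat::eye(4, 4, CV_64F);
    cv::Mat R_inv = R.t();
    R_inv.copyTo(T(cv::Rect(0, 0, 3, 3)));   // top-left 3x3 block = R^T
    cv::Mat pos = -R_inv * t;
    pos.copyTo(T(cv::Rect(3, 0, 1, 3)));     // top-right 3x1 column = -R^T * t
    return T;
}

// Usage: transform a camera-space point (x, y, z) into object space
// cv::Mat p     = (cv::Mat_<double>(4, 1) << x, y, z, 1.0);
// cv::Mat p_obj = camToObjectTransform(R, t) * p;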

So, if you'd like to project a ray from a pixel with coordinates (a,b) (you will probably need to define the center of the image, i.e. the principal point reported by calibrateCamera, as (0,0)), let that pixel be P = (a,b)^T. Its 3D coordinates in camera space are then P_3D = (a,b,0)^T. Let's project the ray 100 pixels in the positive z-direction, i.e. to the point Q_3D = (a,b,100)^T. All you need to do is transform both 3D coordinates into the object coordinate system using the transformation matrix T, and you should be able to draw a line between both points in object space. However, make sure that you don't confuse units: calibrateCamera will report pixel values, while your object coordinate system might be defined in, e.g., cm or mm.
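
Put together, this two-point approach might look like the following sketch (the pixel is shifted by the principal point (cx, cy) so that it sits at (0,0) as described, and T is the 4x4 camera-to-object matrix from above; names are illustrative):

#include <utility>
#include <opencv2/core.hpp>

// Returns the two endpoints of the pixel ray in object coordinates.
// (a, b): pixel coordinates; (cx, cy): principal point from calibrateCamera;
// T: 4x4 camera-to-object transformation (CV_64F).
std::pair<cv::Mat, cv::Mat> pixelRayInObjectSpace(double a, double b,
                                                  double cx, double cy,
                                                  const cv::Mat& T)
{
    // Camera-space endpoints: the centered pixel and the same point
    // pushed 100 units along the positive z-axis.
    cv::Mat P = (cv::Mat_<double>(4, 1) << a - cx, b - cy,   0.0, 1.0);
    cv::Mat Q = (cv::Mat_<double>(4, 1) << a - cx, b - cy, 100.0, 1.0);

    cv::Mat P_obj = T * P;   // transform both endpoints into object space;
    cv::Mat Q_obj = T * Q;   // the line from P_obj to Q_obj is the ray to draw

    return { P_obj, Q_obj };
}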

Answered by Niko