Reconstruct 3D coordinates in the camera coordinate system from 2D pixels with a side condition

I am trying to reconstruct 3D coordinates from 2D pixel coordinates in a camera picture, using a side condition (in MATLAB). I have both the extrinsic and intrinsic camera parameters.

Using a homogeneous transformation I can transform 3D coordinates from an initial world coordinate system to my camera coordinate system. So here I have my extrinsic parameters in my transform matrix R_world_to_Camera:

R_world_to_Camera = [r_11, r_12, r_13, t1;
                     r_21, r_22, r_23, t2;
                     r_31, r_32, r_33, t3;
                        0,    0,    0,  1];
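
For reference, this 4x4 matrix can be built directly from a 3x3 rotation matrix and a 3x1 translation vector. A minimal sketch (R and t here are placeholders for the calibrated extrinsics, not toolbox outputs):

% Minimal sketch: assemble the 4x4 world-to-camera transform from a
% 3x3 rotation R and a 3x1 translation t (placeholder values below;
% substitute your own calibrated extrinsics)
R = eye(3);                             % rotation part (replace)
t = [0; 0; 1];                          % translation part (replace)
R_world_to_Camera = [R, t; 0, 0, 0, 1];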

For intrinsic parameters I used Caltech's "Camera Calibration Toolbox for MatLab" and got these parameters:

Calibration results (with uncertainties): 

 Focal Length:          fc = [ 1017.21523   1012.54901 ] ± [ NaN   NaN ] 
 Principal point:       cc = [ 319.50000   239.50000 ] ± [ NaN   NaN ] 
 Skew:             alpha_c = [ 0.00000 ] ± [ NaN  ]   => angle of pixel axes = 90.00000 ± NaN degrees 
 Distortion:            kc = [ 0.00000   0.00000   0.00000   0.00000  0.00000 ] ± [ NaN   NaN   NaN   NaN    NaN ] 
 Pixel error:          err = [ 0.11596   0.14469 ] 

 Note: The numerical errors are approximately three times the standard deviations (for reference).

From these I then get the camera calibration matrix K (3x3):

K = [1.017215234570303e+03, 0,                     3.195000000000000e+02;
     0,                     1.012549014668498e+03, 2.395000000000000e+02;
     0,                     0,                     1.0000];
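
In general K can be assembled from the toolbox outputs fc, cc, and alpha_c. A minimal sketch using the values reported above:

% Minimal sketch: build K from the calibration parameters reported above
fc = [1017.21523; 1012.54901];   % focal lengths in pixels
cc = [319.50000; 239.50000];     % principal point in pixels
alpha_c = 0;                     % skew coefficient
K = [fc(1), alpha_c*fc(1), cc(1);
     0,     fc(2),         cc(2);
     0,     0,             1];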

and using this I can calculate the 3D-to-2D projection matrix P (3x4) with:

P = K * [eye(3), zeros(3,1)];
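
Intrinsics and extrinsics can also be composed into a single 3x4 matrix that maps homogeneous world coordinates straight to pixels. A minimal sketch (M is a name introduced here, not from the toolbox):

% Minimal sketch: one 3x4 matrix M (hypothetical name) from world to pixels,
% so that P_pixels = M * [X; Y; Z; 1] up to the final normalization
M = K * R_world_to_Camera(1:3, :);   % equals P * R_world_to_Camera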

To convert a point [X, Y, Z] in world coordinates, I first transform it to camera coordinates and then project it to 2D:

% Transformation
P_world = [X; Y; Z; 1];                 % homogeneous coordinates in world coordinate system
P_camera = R_world_to_Camera * P_world; % homogeneous coordinates in camera coordinate system

% Projection
P_pixels = P * P_camera;
P_pixels = P_pixels / P_pixels(3);      % normalize coordinates

So my question now is: how do I reverse these steps? As a side condition, I want the Z coordinate to be known (zero in world coordinates). I tried the solution proposed here on Stack Overflow, but somehow I get wrong coordinates. Any ideas? Any help is much appreciated!

Asked Jul 04 '12 by EliteTUM

1 Answer

You cannot reverse these steps in general: depth and scale information is lost when 3D points are projected onto a 2D image. However, if, as you indicate, all your 3D points lie on the Z = 0 plane, then getting them back from their projections is straightforward: compute the inverse Ki = K^-1 of the camera matrix and apply it to the image points in homogeneous coordinates.

P_camera = Ki * [u, v, 1]'

where [u, v] are the image coordinates and the apostrophe denotes transposition. The 3D points you want lie on the rays from the camera centre through the P_camera's. Express both in world coordinates:

P_world = [R|t]_camera_to_world * [P_camera; 1]

C_world = [R|t]_camera_to_world * [0, 0, 0, 1]'

where [R|t]_camera_to_world is the 4x4 coordinate transform from camera to world, i.e. the inverse of your R_world_to_Camera. Now, the set of points on each ray is expressed as

P = C_world + lambda * (P_world - C_world);

where lambda is a scalar (the coordinate along the ray). Imposing the condition P(3) = 0 yields lambda = -C_world(3) / (P_world(3) - C_world(3)), which places your points on the Z = 0 plane.
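
Putting the whole procedure together, a minimal MATLAB sketch, assuming K and R_world_to_Camera from the question are in the workspace and lens distortion is zero (kc = 0, as in the calibration above):

% Minimal sketch: back-project a pixel [u, v] onto the Z = 0 world plane
Ki = inv(K);                                   % inverse intrinsics
T_camera_to_world = inv(R_world_to_Camera);    % 4x4 camera-to-world transform

P_camera = Ki * [u; v; 1];                     % point on the ray, camera frame
P_world  = T_camera_to_world * [P_camera; 1];  % same point, world frame
C_world  = T_camera_to_world * [0; 0; 0; 1];   % camera centre, world frame

ray_dir = P_world - C_world;                   % ray direction (4th component is 0)
lambda  = -C_world(3) / ray_dir(3);            % solve P(3) = 0
P       = C_world + lambda * ray_dir;          % homogeneous point on Z = 0 plane
P_xyz   = P(1:3);                              % [X; Y; 0] in world coordinates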

Answered Sep 21 '22 by Francesco Callari