 

reprojectImageTo3D() in OpenCV


I've been trying to compute real world coordinates of points from a disparity map using the reprojectImageTo3D() function provided by OpenCV, but the output seems to be incorrect.

I have the calibration parameters, and compute the Q matrix using

stereoRectify(left_cam_matrix, left_dist_coeffs, right_cam_matrix, right_dist_coeffs, frame_size, stereo_params.R, stereo_params.T, R1, R2, P1, P2, Q, CALIB_ZERO_DISPARITY, 0, frame_size, 0, 0);
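(A quick sanity check of my own, not from the original post: you can print Q and compare it against the reprojection-matrix form given in the stereoRectify documentation, roughly [1 0 0 -c_x; 0 1 0 -c_y; 0 0 0 f; 0 0 -1/T_x (c_x - c'_x)/T_x], to confirm the focal length and baseline terms look sensible.)

std::cout << "Q = " << std::endl << Q << std::endl;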

I believe this first step is correct, since the stereo frames are being rectified properly, and the distortion removal I'm performing also seems all right. The disparity map is being computed with OpenCV's block matching algorithm, and it looks good too.
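(A side note from me, not part of the original post: one quick way to double-check rectification is to stack the rectified pair side by side and draw a few horizontal lines; corresponding features should sit on the same scanline. A minimal sketch, assuming the left_undist_rect / right_undist_rect images from the [EDIT] code below, with matching size and type:)

cv::Mat side_by_side;
cv::hconcat(left_undist_rect, right_undist_rect, side_by_side);  // stack the rectified pair horizontally
for (int y = 0; y < side_by_side.rows; y += 40)
    cv::line(side_by_side, cv::Point(0, y), cv::Point(side_by_side.cols - 1, y),
             cv::Scalar::all(255));  // horizontal reference lines
cv::imshow("rectification check", side_by_side);
cv::waitKey(0);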

The 3D points are being calculated as follows:

cv::Mat XYZ(disparity8U.size(), CV_32FC3);
reprojectImageTo3D(disparity8U, XYZ, Q, false, CV_32F);

But for some reason they form some sort of cone, and are not even close to what I'd expect, considering the disparity map. I found out that other people had a similar problem with this function, and I was wondering if someone has the solution.

Thanks in advance!

[EDIT]

stereoRectify(left_cam_matrix, left_dist_coeffs, right_cam_matrix, right_dist_coeffs,frame_size, stereo_params.R, stereo_params.T, R1, R2, P1, P2, Q, CALIB_ZERO_DISPARITY, 0, frame_size, 0, 0);

initUndistortRectifyMap(left_cam_matrix, left_dist_coeffs, R1, P1, frame_size,CV_32FC1, left_undist_rect_map_x, left_undist_rect_map_y);
initUndistortRectifyMap(right_cam_matrix, right_dist_coeffs, R2, P2, frame_size, CV_32FC1, right_undist_rect_map_x, right_undist_rect_map_y);
cv::remap(left_frame, left_undist_rect, left_undist_rect_map_x, left_undist_rect_map_y, CV_INTER_CUBIC, BORDER_CONSTANT, 0);
cv::remap(right_frame, right_undist_rect, right_undist_rect_map_x, right_undist_rect_map_y, CV_INTER_CUBIC, BORDER_CONSTANT, 0);

cv::Mat imgDisparity32F = Mat( left_undist_rect.rows, left_undist_rect.cols, CV_32F );  
StereoBM sbm(StereoBM::BASIC_PRESET,80,5);
sbm.state->preFilterSize  = 15;
sbm.state->preFilterCap   = 20;
sbm.state->SADWindowSize  = 11;
sbm.state->minDisparity   = 0;
sbm.state->numberOfDisparities = 80;
sbm.state->textureThreshold = 0;
sbm.state->uniquenessRatio = 8;
sbm.state->speckleWindowSize = 0;
sbm.state->speckleRange = 0;

// Compute disparity
sbm(left_undist_rect, right_undist_rect, imgDisparity32F, CV_32F );

// Compute world coordinates from the disparity image
cv::Mat XYZ(imgDisparity32F.size(), CV_32FC3);
reprojectImageTo3D(imgDisparity32F, XYZ, Q, false, CV_32F);
print_3D_points(imgDisparity32F, XYZ);
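(Diagnostic added by me, not part of the original code: before reprojecting, it is worth checking the range of the values actually passed to reprojectImageTo3D. With minDisparity = 0 and numberOfDisparities = 80, true disparities should fall roughly in [0, 80).)

double min_d, max_d;
cv::minMaxLoc(imgDisparity32F, &min_d, &max_d);  // inspect the disparity range
std::cout << "disparity range: [" << min_d << ", " << max_d << "]" << std::endl;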

[EDIT]

Adding the code used to compute 3D coords from disparity:

cv::Vec3f *StereoFrame::compute_3D_world_coordinates(int row, int col,
    shared_ptr<StereoParameters> stereo_params_sptr){

    cv::Mat Q_32F;
    stereo_params_sptr->Q_sptr->convertTo(Q_32F, CV_32F);

    cv::Mat_<float> vec(4,1);
    vec(0) = col;
    vec(1) = row;
    vec(2) = this->disparity_sptr->at<float>(row,col);

    // Discard points with 0 disparity
    if(vec(2) == 0) return NULL;

    vec(3) = 1;
    vec = Q_32F*vec;
    vec /= vec(3);

    // Discard points that are too far from the camera, and thus are highly
    // unreliable
    if(abs(vec(0)) > 10 || abs(vec(1)) > 10 || abs(vec(2)) > 10) return NULL;

    cv::Vec3f *point3f = new cv::Vec3f();
    (*point3f)[0] = vec(0);
    (*point3f)[1] = vec(1);
    (*point3f)[2] = vec(2);

    return point3f;
}
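(For completeness, a hypothetical caller sketch of my own; stereo_frame, row, col and stereo_params_sptr are placeholder names. Since the function allocates the point with new, the caller owns it and has to delete it.)

cv::Vec3f *p = stereo_frame.compute_3D_world_coordinates(row, col, stereo_params_sptr);
if (p != NULL) {
    std::cout << "X=" << (*p)[0] << " Y=" << (*p)[1]
              << " Z=" << (*p)[2] << std::endl;
    delete p;  // free the heap-allocated point
}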
Asked by edu_ on Mar 15 '14

1 Answer

Your code seems fine to me. It could be a bug in reprojectImageTo3D. Try replacing it with the following code, which plays the same role:

cv::Mat_<cv::Vec3f> XYZ(disparity32F.rows,disparity32F.cols);   // Output point cloud
cv::Mat_<float> vec_tmp(4,1);
for(int y=0; y<disparity32F.rows; ++y) {
    for(int x=0; x<disparity32F.cols; ++x) {
        vec_tmp(0)=x; vec_tmp(1)=y; vec_tmp(2)=disparity32F.at<float>(y,x); vec_tmp(3)=1;
        vec_tmp = Q*vec_tmp;
        vec_tmp /= vec_tmp(3);
        cv::Vec3f &point = XYZ.at<cv::Vec3f>(y,x);
        point[0] = vec_tmp(0);
        point[1] = vec_tmp(1);
        point[2] = vec_tmp(2);
    }
}
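One small addition of my own (not part of the original answer): pixels with zero disparity make the fourth component of Q*vec_tmp (nearly) zero, so the division sends them towards infinity. You may want to skip them inside the inner loop, for example:

float d = disparity32F.at<float>(y, x);
if (d <= 0.0f) {
    XYZ.at<cv::Vec3f>(y, x) = cv::Vec3f(0.f, 0.f, 0.f);  // mark the pixel as invalid
    continue;
}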

I have never used reprojectImageTo3D myself; however, I am successfully using code similar to the snippet above.

[Initial answer]

As explained in the documentation for StereoBM, if you request a CV_16S disparity map, you have to divide each disparity value by 16 before using it.

Hence, you should convert the disparity map as follows before using it:

imgDisparity16S.convertTo( imgDisparity32F, CV_32F, 1./16);

You can also directly request a CV_32F disparity map from the StereoBM structure, in which case you directly get the true disparities.
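For example, reusing the variable names from the question's [EDIT] code:

cv::Mat disparity32F(left_undist_rect.size(), CV_32F);
sbm(left_undist_rect, right_undist_rect, disparity32F, CV_32F);  // computes true disparities directly, no division by 16 needed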

Answered by BConic on Sep 23 '22