OpenCV: rotation/translation vector to OpenGL modelview matrix

I'm trying to use OpenCV to do some basic augmented reality. The way I'm going about it is using findChessboardCorners to get a set of points from a camera image. Then, I create a 3D quad along the z = 0 plane and use solvePnP to get a homography between the imaged points and the planar points. From that, I figure I should be able to set up a modelview matrix which will allow me to render a cube with the right pose on top of the image.

The documentation for solvePnP says that it outputs a rotation vector "that (together with [the translation vector]) brings points from the model coordinate system to the camera coordinate system." I think that's the opposite of what I want; since my quad is on the plane z = 0, I want a modelview matrix which will transform that quad to the appropriate 3D plane.
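
For reference, this is how I currently understand rvec and tvec mapping onto a 4×4 matrix; it's only a sketch, using cv::Rodrigues, and it assumes the vectors come back as 3x1 doubles and that OpenGL wants the result column-major:

cv::Mat rotation;
cv::Rodrigues(rvec, rotation);           // expand the axis-angle vector into a 3x3 rotation matrix

GLfloat modelView[16];
for (int row = 0; row < 3; ++row)
{
    for (int col = 0; col < 3; ++col)
        modelView[col * 4 + row] = (GLfloat)rotation.at<double>(row, col);  // column-major layout
    modelView[12 + row] = (GLfloat)tvec.at<double>(row, 0);                 // translation in the last column
}
modelView[3] = modelView[7] = modelView[11] = 0.0f;
modelView[15] = 1.0f;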

I thought that by performing the opposite rotations and translations in the opposite order I could calculate the correct modelview matrix, but that doesn't seem to work. While the rendered object (a cube) does move with the camera image and seems to be roughly correct translationally, the rotation just doesn't work at all; it rotates on multiple axes when it should only be rotating on one, and sometimes in the wrong direction. Here's what I'm doing so far:

std::vector<Point2f> corners;
bool found = findChessboardCorners(*_imageBuffer, cv::Size(5,4), corners,
                                      CV_CALIB_CB_FILTER_QUADS |
                                      CV_CALIB_CB_FAST_CHECK);
if(found)
{
  drawChessboardCorners(*_imageBuffer, cv::Size(5, 4), corners, found);  // same pattern size as the find call

  std::vector<double> distortionCoefficients(5);  // camera distortion
  distortionCoefficients[0] = 0.070969;
  distortionCoefficients[1] = 0.777647;
  distortionCoefficients[2] = -0.009131;
  distortionCoefficients[3] = -0.013867;
  distortionCoefficients[4] = -5.141519;

  // Since the image was resized, we need to scale the found corner points
  float sw = _width / SMALL_WIDTH;
  float sh = _height / SMALL_HEIGHT;
  std::vector<Point2f> board_verts;
  board_verts.push_back(Point2f(corners[0].x * sw, corners[0].y * sh));
  board_verts.push_back(Point2f(corners[15].x * sw, corners[15].y * sh));
  board_verts.push_back(Point2f(corners[19].x * sw, corners[19].y * sh));
  board_verts.push_back(Point2f(corners[4].x * sw, corners[4].y * sh));
  Mat boardMat(board_verts);

  std::vector<Point3f> square_verts;
  square_verts.push_back(Point3f(-1, 1, 0));                              
  square_verts.push_back(Point3f(-1, -1, 0));
  square_verts.push_back(Point3f(1, -1, 0));
  square_verts.push_back(Point3f(1, 1, 0));
  Mat squareMat(square_verts);

  // Transform the camera's intrinsic parameters into an OpenGL camera matrix
  glMatrixMode(GL_PROJECTION);
  glLoadIdentity();

  // Camera parameters
  double f_x = 786.42938232; // Focal length in x axis
  double f_y = 786.42938232; // Focal length in y axis (usually the same?)
  double c_x = 217.01358032; // Principal point x
  double c_y = 311.25384521; // Principal point y


  cv::Mat cameraMatrix(3,3,CV_32FC1);
  cameraMatrix.at<float>(0,0) = f_x;
  cameraMatrix.at<float>(0,1) = 0.0;
  cameraMatrix.at<float>(0,2) = c_x;
  cameraMatrix.at<float>(1,0) = 0.0;
  cameraMatrix.at<float>(1,1) = f_y;
  cameraMatrix.at<float>(1,2) = c_y;
  cameraMatrix.at<float>(2,0) = 0.0;
  cameraMatrix.at<float>(2,1) = 0.0;
  cameraMatrix.at<float>(2,2) = 1.0;

  Mat rvec(3, 1, CV_32F), tvec(3, 1, CV_32F);
  solvePnP(squareMat, boardMat, cameraMatrix, distortionCoefficients, 
               rvec, tvec);

  _rv[0] = rvec.at<double>(0, 0);
  _rv[1] = rvec.at<double>(1, 0);
  _rv[2] = rvec.at<double>(2, 0);
  _tv[0] = tvec.at<double>(0, 0);
  _tv[1] = tvec.at<double>(1, 0);
  _tv[2] = tvec.at<double>(2, 0);
}

Then in the drawing code...

GLKMatrix4 modelViewMatrix = GLKMatrix4MakeTranslation(0.0f, 0.0f, 0.0f);
modelViewMatrix = GLKMatrix4Translate(modelViewMatrix, -tv[1], -tv[0], -tv[2]);
modelViewMatrix = GLKMatrix4Rotate(modelViewMatrix, -rv[0], 1.0f, 0.0f, 0.0f);
modelViewMatrix = GLKMatrix4Rotate(modelViewMatrix, -rv[1], 0.0f, 1.0f, 0.0f);
modelViewMatrix = GLKMatrix4Rotate(modelViewMatrix, -rv[2], 0.0f, 0.0f, 1.0f);

The vertices I'm rendering create a cube of unit length around the origin (i.e. from -0.5 to 0.5 along each edge). I know that OpenGL's transformation functions apply transformations in "reverse order," so the above should rotate the cube about the z, y, and then x axes, and then translate it. However, it seems like it's being translated first and then rotated, so perhaps Apple's GLKMatrix4 works differently?
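
To check my understanding of the ordering, here is the same construction written with explicit multiplies. As far as I can tell GLKMatrix4Translate and GLKMatrix4Rotate right-multiply, and GLKMatrix4Multiply(a, b) returns a * b, so the two versions should produce the same matrix:

GLKMatrix4 T  = GLKMatrix4MakeTranslation(-tv[1], -tv[0], -tv[2]);
GLKMatrix4 Rx = GLKMatrix4MakeRotation(-rv[0], 1.0f, 0.0f, 0.0f);
GLKMatrix4 Ry = GLKMatrix4MakeRotation(-rv[1], 0.0f, 1.0f, 0.0f);
GLKMatrix4 Rz = GLKMatrix4MakeRotation(-rv[2], 0.0f, 0.0f, 1.0f);

// modelViewMatrix = T * Rx * Ry * Rz, so a vertex v becomes T * (Rx * (Ry * (Rz * v))):
// rotated about z, then y, then x, and only then translated.
GLKMatrix4 modelViewMatrix =
    GLKMatrix4Multiply(T, GLKMatrix4Multiply(Rx, GLKMatrix4Multiply(Ry, Rz)));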

This question seems very similar to mine, and in particular coder9's answer seems like it might be more or less what I'm looking for. However, I tried it and compared the results to my method, and the matrices I arrived at in both cases were the same. I feel like that answer is right, but that I'm missing some crucial detail.

asked Apr 25 '12 by Mitch Lindgren


1 Answer

You have to make sure the axes are facing the correct direction. In particular, the y and z axes point in different directions in OpenGL and OpenCV, so that the x-y-z basis stays right-handed (direct). You can find some information and code (with an iPad camera) in this blog post.
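
Concretely, what I mean is something like the following untested sketch (assuming rvec and tvec are the 3x1 double Mats returned by solvePnP): negate the y and z axes of the camera frame before building the modelview.

cv::Mat R;
cv::Rodrigues(rvec, R);                     // 3x3 rotation from solvePnP

cv::Mat cvToGl = cv::Mat::eye(3, 3, CV_64F);
cvToGl.at<double>(1, 1) = -1.0;             // OpenCV's y points down, OpenGL's points up
cvToGl.at<double>(2, 2) = -1.0;             // OpenCV looks along +z, OpenGL along -z

cv::Mat Rgl = cvToGl * R;                   // rotation expressed in OpenGL's camera frame
cv::Mat tgl = cvToGl * tvec;                // translation expressed in OpenGL's camera frame
// Rgl and tgl can then be packed into a column-major 4x4 modelview as usual.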

-- Edit -- Ah, ok. Unfortunately, I used these resources to do it the other way round (OpenGL ---> OpenCV) to test some algorithms. My main issue was that the row order of the images is inverted between OpenGL and OpenCV (maybe this helps).
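
By inverted row order I mean that an image read back from OpenGL is stored bottom-up, so it has to be flipped vertically before (or after) OpenCV touches it. A trivial sketch, with a placeholder name:

void flipRowOrderForOpenCV(cv::Mat &frame)
{
    cv::flip(frame, frame, 0);   // flipCode = 0 flips around the x axis (top <-> bottom)
}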

When simulating cameras, I came across the same projection matrices that can be found here and in the generalized projection matrix paper. The paper quoted in the comments of the blog post also shows the link between computer vision and OpenGL projections.
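
For completeness, the projection those references arrive at looks roughly like this; treat it as a sketch, since the signs in the third column depend on where your image origin sits and which way y grows:

// Build a column-major OpenGL projection matrix from the pinhole intrinsics
// (f_x, f_y, c_x, c_y), the image size, and the near/far clip planes.
void projectionFromIntrinsics(double fx, double fy, double cx, double cy,
                              double width, double height,
                              double zNear, double zFar, GLfloat p[16])
{
    for (int i = 0; i < 16; ++i) p[i] = 0.0f;
    p[0]  = (GLfloat)(2.0 * fx / width);
    p[5]  = (GLfloat)(2.0 * fy / height);
    p[8]  = (GLfloat)(1.0 - 2.0 * cx / width);
    p[9]  = (GLfloat)(2.0 * cy / height - 1.0);
    p[10] = (GLfloat)(-(zFar + zNear) / (zFar - zNear));
    p[11] = -1.0f;
    p[14] = (GLfloat)(-2.0 * zFar * zNear / (zFar - zNear));
}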

answered Oct 04 '22 by sansuiso