
How to apply the camera pose transformation computed using EPnP to the VTK camera?

For my augmented reality project, I have a 3D model viewed through a VTK camera and the corresponding real object viewed through a real camera.

I used EPnP to estimate the extrinsic matrix of the real camera (this camera was calibrated beforehand, so I know its intrinsic parameters). As input, the EPnP algorithm received 3D points from VTK, their corresponding 2D points in the real camera image, and the intrinsic parameters of the real camera.

After that, I obtained a rotation matrix with elements R1, R2, ..., R9 and a translation vector with elements T1, T2 and T3.

So my extrinsic matrix of the real camera looks like this (let's call it extrinsicReal):

R1 R2 R3 T1
R4 R5 R6 T2
R7 R8 R9 T3
 0  0  0  1

After this, I estimate the extrinsic matrix of my VTK camera using the following code:

vtkSmartPointer<vtkMatrix4x4> extrinsicVTK = vtkSmartPointer<vtkMatrix4x4>::New();
extrinsicVTK->DeepCopy(renderer->GetActiveCamera()->GetViewTransformMatrix());

To fuse the 3D model with the real camera video, the VTK camera should be set to the same position as the real camera, and the focal length of the VTK camera should equal that of the real camera. Another important step is to apply the real camera's extrinsic matrix to the VTK camera. How do I do that?

What I did was take the inverse of extrinsicReal and multiply it with extrinsicVTK to get a new 4x4 matrix (let's call it newMatrix), which I then applied to transform the VTK camera.

vtkSmartPointer<vtkMatrix4x4> extrinsicRealInvert = vtkSmartPointer<vtkMatrix4x4>::New();
vtkMatrix4x4::Invert(extrinsicReal, extrinsicRealInvert);

vtkSmartPointer<vtkMatrix4x4> newMatrix = vtkSmartPointer<vtkMatrix4x4>::New();
vtkMatrix4x4::Multiply4x4(extrinsicRealInvert, extrinsicVTK, newMatrix);

vtkSmartPointer<vtkTransform> transform = vtkSmartPointer<vtkTransform>::New();
transform->SetMatrix(newMatrix);
transform->Update();

renderer->GetActiveCamera()->ApplyTransform(transform);

I am not really sure if this is the correct method, but I checked the real camera position (obtained from EPnP) against the VTK camera position (after applying the transform above), and they are exactly the same. The orientation of the real camera and the direction of projection of the VTK camera also match.

The problem is that even though the above parameters match for both the VTK and the real camera, the 3D VTK model does not seem to be perfectly aligned with the real camera video. Can someone guide me step by step through debugging the issue?

asked Aug 28 '14 by j1897
1 Answer

Yeah, things get complicated when applying those parameters to the VTK camera. Here is how I did it (just excerpts of the important code passages; the whole code would be far too much to paste here and would be useless for you anyway). Other points to consider:

  1. I am rendering the endoscope image as background texture in my vtkRenderWindow.
  2. I am using a mix of VTK, ITK (vnl) and OpenCV functions, but they should be interchangeable (e.g. cvRound could also be replaced by vtkMath::Round(), etc.)

First of all, I use the active camera from my vtkRenderer:

d->m_Renderer->GetActiveCamera()

The next step is to continuously update the active camera by applying your transform. Depending on whether your render window is resizable or not, you have to initialize, or also continuously update, two further parameters: 1. ViewAngle, 2. WindowCenter (EXTREMELY important, and not documented at all by VTK: in the end you have to apply the principal point you found by calibration here, or your surfaces will be rendered with an offset. It took me 3 months to find this two-line solution).

Calculation of the view angle:

  double focalLengthY = _CameraIntrinsics->GetFocalLengthY();
  if( _WindowSize.height != _ImageSize.height )
  {
    double factor = static_cast<double>(_WindowSize.height)/static_cast<double>(_ImageSize.height);
    focalLengthY = _CameraIntrinsics->GetFocalLengthY() * factor;
  }

  _ViewAngle = 2 * atan( ( _WindowSize.height / 2 ) / focalLengthY ) * 180 / vnl_math::pi;

Apply the view angle:

d->m_Renderer->GetActiveCamera()->SetViewAngle(viewAngle);

Calculation of the WindowCenter:

  double px = 0;
  double width = 0;

  double py = 0;
  double height = 0;

  if( _ImageSize.width != _WindowSize.width || _ImageSize.height != _WindowSize.height )
  {
    double factor = static_cast<double>(_WindowSize.height)/static_cast<double>(_ImageSize.height);

    px = factor * _CameraIntrinsics->GetPrincipalPointX();
    width = _WindowSize.width;
    int expectedWindowSize = cvRound(factor * static_cast<double>(_ImageSize.width));
    if( expectedWindowSize != _WindowSize.width )
    {
      int diffX = (_WindowSize.width - expectedWindowSize) / 2;
      px = px + diffX;
    }

    py = factor * _CameraIntrinsics->GetPrincipalPointY();
    height = _WindowSize.height;
  }
  else
  {
    px = _CameraIntrinsics->GetPrincipalPointX();
    width = _ImageSize.width;

    py = _CameraIntrinsics->GetPrincipalPointY();
    height = _ImageSize.height;
  }

  double cx = width - px;
  double cy = py;

  _WindowCenter.x = cx / ( ( width-1)/2 ) - 1 ;
  _WindowCenter.y = cy / ( ( height-1)/2 ) - 1;

Setting the Window Center:

 d->m_Renderer->GetActiveCamera()->SetWindowCenter(_WindowCenter.x, _WindowCenter.y);

Applying the extrinsic matrix to the camera:

// create a scaling matrix (the Transform class is a wrapper for a 4x4 matrix; its methods should be self-documenting)
d->m_ScaledTransform = Transform::New();
d->m_ScaleMat.set_identity();
d->m_ScaleMat(1,1) = -d->m_ScaleMat(1,1);
d->m_ScaleMat(2,2) = -d->m_ScaleMat(2,2);

// scale the matrix appropriately (m_VnlMat is a VNL 4x4 Matrix)
d->m_VnlMat = d->m_CameraExtrinsicMatrix->GetMatrix();
d->m_VnlMat = d->m_ScaleMat * d->m_VnlMat;
d->m_ScaledTransform->SetMatrix( d->m_VnlMat );

d->m_VnlRotation = d->m_ScaledTransform->GetVnlRotationMatrix();
d->m_VnlRotation.normalize_rows();
d->m_VnlInverseRotation = vnl_matrix_inverse<mitk::ScalarType>( d->m_VnlRotation );

// rotate translation vector by inverse rotation P = P'
d->m_VnlTranslation = d->m_ScaledTransform->GetVnlTranslation();
d->m_VnlTranslation = d->m_VnlInverseRotation * d->m_VnlTranslation;
d->m_VnlTranslation *= -1;  // save -P'

// from here proceed as normal
// focalPoint = P-viewPlaneNormal, viewPlaneNormal is rotation[2]
d->m_ViewPlaneNormal[0] = d->m_VnlRotation(2,0);
d->m_ViewPlaneNormal[1] = d->m_VnlRotation(2,1);
d->m_ViewPlaneNormal[2] = d->m_VnlRotation(2,2);

d->m_vtkCamera->SetPosition(d->m_VnlTranslation[0], d->m_VnlTranslation[1], d->m_VnlTranslation[2]);

d->m_vtkCamera->SetFocalPoint( d->m_VnlTranslation[0] - d->m_ViewPlaneNormal[0],
                               d->m_VnlTranslation[1] - d->m_ViewPlaneNormal[1],
                               d->m_VnlTranslation[2] - d->m_ViewPlaneNormal[2] );
d->m_vtkCamera->SetViewUp( d->m_VnlRotation(1,0), d->m_VnlRotation(1,1), d->m_VnlRotation(1,2) );

And finally do a clipping range reset:

d->m_Renderer->ResetCameraClippingRange();

Hope this helps. I don't have the time to explain more details. Especially the last code block (applying the extrinsics to the camera) has some implications which are connected to the coordinate system orientation, but it worked for me.

Best, Michael

answered Sep 21 '22 by Michael