
Extract transform and rotation matrices from homography?

I have two consecutive images from a camera and I want to estimate the change in camera pose. (Image: two pictures with camera movement.)

I calculate the optical flow:

' Assumes Emgu CV 2.x and Imports System.Drawing, Emgu.CV,
' Emgu.CV.Structure, Emgu.CV.CvEnum
Const MAXFEATURES As Integer = 100

' Load both frames and convert them to greyscale
Dim imgA As New Image(Of Bgr, Byte)("pic1.bmp")
Dim imgB As New Image(Of Bgr, Byte)("pic2.bmp")
Dim grayA = imgA.Convert(Of Gray, Byte)()
Dim grayB = imgB.Convert(Of Gray, Byte)()
Dim imagesize = grayA.Size

' Scratch buffers for the pyramidal Lucas-Kanade tracker
Dim pyrBufferA As New Image(Of Gray, Byte)(imagesize.Width + 8, imagesize.Height \ 3)
Dim pyrBufferB As New Image(Of Gray, Byte)(imagesize.Width + 8, imagesize.Height \ 3)

' Find strong corners in frame A and refine them to sub-pixel accuracy
Dim features As Integer = MAXFEATURES
Dim featuresA = grayA.GoodFeaturesToTrack(features, 0.01, 25, 3)
grayA.FindCornerSubPix(featuresA, New Size(10, 10),
                       New Size(-1, -1),
                       New MCvTermCriteria(20, 0.03))
features = featuresA(0).Length

' Track the corners from frame A into frame B
Dim featuresB(0)() As PointF
Dim status() As Byte = Nothing
Dim errors() As Single = Nothing
Dim flags As LKFLOW_TYPE = LKFLOW_TYPE.DEFAULT
OpticalFlow.PyrLK(grayA, grayB, pyrBufferA, pyrBufferB,
                  featuresA(0), New Size(25, 25), 3,
                  New MCvTermCriteria(20, 0.03),
                  flags, featuresB(0), status, errors)

' Pack the matched point pairs into matrices for cvFindHomography
Dim pointsA As New Matrix(Of Single)(features, 2)
Dim pointsB As New Matrix(Of Single)(features, 2)
For i As Integer = 0 To features - 1
    pointsA(i, 0) = featuresA(0)(i).X
    pointsA(i, 1) = featuresA(0)(i).Y
    pointsB(i, 0) = featuresB(0)(i).X
    pointsB(i, 1) = featuresB(0)(i).Y
Next

' Estimate the A->B homography with RANSAC
Dim Homography As New Matrix(Of Double)(3, 3)
CvInvoke.cvFindHomography(pointsA.Ptr, pointsB.Ptr, Homography.Ptr,
                          HOMOGRAPHY_METHOD.RANSAC, 1, IntPtr.Zero)

It looks right: the camera moved leftwards and upwards. (Image: optical flow.) Now I want to find out how much the camera moved and rotated. If I declare my camera position and what it's looking at:

' Create camera location at origin and lookat (straight ahead, 1 in the Z axis)
Dim location As New Matrix(Of Double)(2, 3)
location(0, 0) = 0 ' X location
location(0, 1) = 0 ' Y location
location(0, 2) = 0 ' Z location
location(1, 0) = 0 ' X lookat
location(1, 1) = 0 ' Y lookat
location(1, 2) = 1 ' Z lookat

How do I calculate the new position and lookat?

If I'm doing this all wrong or if there's a better method, any suggestions would be very welcome, thanks!

asked Sep 12 '11 by smirkingman

2 Answers

For pure camera rotation, R = A^-1·H·A. To prove this, consider the plane-to-image homographies H1 = A and H2 = A·R, where A is the camera intrinsic matrix. Then the image-to-image homography is H12 = H2·H1^-1 = A·R·A^-1, from which you can obtain R.
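
As a minimal sketch of that formula in plain VB.NET (the intrinsic values fx, fy, cx, cy are invented for illustration; use your camera's calibrated intrinsics and the H estimated by cvFindHomography above):

' Sketch of R = A^-1 * H * A for a purely rotating camera. All numeric
' values are placeholders, not calibrated parameters.
Module RotationFromHomography

    ' 3x3 matrix product C = M * N
    Function Mul(M As Double(,), N As Double(,)) As Double(,)
        Dim C(2, 2) As Double
        For r As Integer = 0 To 2
            For c As Integer = 0 To 2
                For k As Integer = 0 To 2
                    C(r, c) += M(r, k) * N(k, c)
                Next
            Next
        Next
        Return C
    End Function

    Sub Main()
        ' Assumed example intrinsics: focal lengths in pixels, principal point
        Dim fx = 700.0, fy = 700.0
        Dim cx = 320.0, cy = 160.0
        Dim A = New Double(,) {{fx, 0, cx}, {0, fy, cy}, {0, 0, 1}}
        ' Closed-form inverse of the upper-triangular intrinsic matrix
        Dim Ainv = New Double(,) {{1 / fx, 0, -cx / fx},
                                  {0, 1 / fy, -cy / fy},
                                  {0, 0, 1}}
        ' Placeholder homography; substitute the RANSAC estimate
        Dim H = New Double(,) {{1, 0, 5}, {0, 1, 3}, {0, 0, 1}}
        Dim R = Mul(Ainv, Mul(H, A)) ' valid for pure rotation only
    End Sub
End Module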

Camera translation is harder to estimate. If the camera translates, you first have to find the fundamental matrix (not a homography): x'^T·F·x = 0. Then convert it into an essential matrix, E = A^T·F·A. Then you can decompose E into rotation and translation, E = [t]x·R, where [t]x denotes the cross-product (skew-symmetric) matrix of the translation vector t. The decomposition is not obvious; see this.
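
For completeness, the standard SVD-based recipe (from Hartley and Zisserman; summarised here, not taken from the answer above) is: take the SVD E = U·diag(1,1,0)·V^T and let

W = [ 0 -1 0 ]
    [ 1  0 0 ]
    [ 0  0 1 ]

Then R = U·W·V^T or U·W^T·V^T, and t = ±u3 (the last column of U). That gives four candidate (R, t) pairs; the correct one is the pair that places the triangulated points in front of both cameras (the cheirality check).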

The rotation you get will be exact, while the translation vector can be found only up to scale. Intuitively, this scale ambiguity means that from the two images alone you cannot tell whether the objects are close and small or far away and large. To disambiguate, we may use objects of familiar size, a known distance between two points, etc.

Finally, note that the human visual system has a similar problem: though we "know" the distance between our eyes, when both eyes are converged on an object the disparity is always zero, and from disparity alone we cannot tell what the distance is. Human vision relies on the eyes' vergence signal to triangulate absolute distance.

answered Sep 22 '22 by Vlad


Well, what you're looking at is, in simple terms, a Pythagorean-theorem problem, a^2 + b^2 = c^2. However, when it comes to camera-based applications, things are not easy to determine accurately. You have found half of the detail you need for "a"; however, finding "b" or "c" is much harder.

The Short Answer

Basically, it can't be done with a single camera. But it can be done with two cameras.

The Long-Winded Answer (thought I'd explain in more depth, no pun intended)

I'll try to explain. Say we select two points within our image and move the camera left. We know the distance from the camera to each point: B1 is 20 mm away and B2 is 40 mm away. Now let's assume that we process the image and our measurements are A1 = (0,2) and A2 = (0,4); these relate to B1 and B2 respectively. Note that A1 and A2 are not physical measurements; they are pixels of movement.

What we now have to do is multiply the change in A1 and A2 by a calculated constant, which will be the real-world distance at B1 and B2. NOTE: each of these constants is different, depending on the distance B*. This all relates to the angle of view, more commonly called the field of view in photography, at different distances. You can calculate the constant accurately if you know the size of each pixel on the camera's CCD and the focal length of the lens inside the camera.

I would expect this isn't the case, so at different distances you have to place an object whose length you know and see how many pixels it takes up. Close up, you can use a ruler to make things easier. From these measurements you form a curve with a line of best fit, where the X-axis is the distance of the object and the Y-axis is the pixel-to-distance constant that you must multiply your movement by; see the sketch of this fitting step below.
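
As a rough sketch of that calibration step in code (the distance and mm-per-pixel samples below are invented purely for illustration):

' Hypothetical calibration sketch: measure mm-per-pixel at a few known
' distances, then fit a least-squares line so the constant can be
' estimated at any depth. All sample values are invented.
Imports System.Linq

Module CalibrationCurve
    Sub Main()
        Dim dist() As Double = {100, 200, 400, 800}       ' object distance, mm
        Dim mmPerPx() As Double = {0.05, 0.1, 0.21, 0.39} ' measured constants

        ' Ordinary least-squares fit: mmPerPx ~ m * dist + b
        Dim n = dist.Length
        Dim sx = dist.Sum()
        Dim sy = mmPerPx.Sum()
        Dim sxx = dist.Select(Function(x) x * x).Sum()
        Dim sxy = dist.Zip(mmPerPx, Function(x, y) x * y).Sum()
        Dim m = (n * sxy - sx * sy) / (n * sxx - sx * sx)
        Dim b = (sy - m * sx) / n

        ' Interpolate the constant at, say, 300 mm
        Console.WriteLine("mm/px at 300mm ~ " & (m * 300 + b).ToString("F3"))
    End Sub
End Module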

So how do we apply this curve? Well, it's guess work. In theory, the larger the measurement of movement A*, the closer the object is to the camera. In our example, say the ratios for A1 and A2 are 5 mm and 3 mm per pixel respectively; we would then know that point B1 has moved 10 mm (2 px x 5 mm) and B2 has moved 12 mm (4 px x 3 mm). But let's face it, we will never know B, and we will never be able to tell whether a movement of 20 pixels is an object close up not moving far or an object far away moving a much greater distance. This is why things like the Xbox Kinect use additional sensors to get depth information that can be tied to the objects within the image.

What you're attempting could be done with two cameras: as the distance between the cameras is known, the movement can be calculated much more accurately (effectively without using a depth sensor). The maths behind this is extremely complex, and I would suggest looking up some journal papers on the subject. If you would like me to explain the theory, I can attempt to.

All my experience comes from designing high-speed video acquisition and image processing for my PhD, so trust me: it can't be done with one camera, sorry. I hope some of this helps.

Cheers

Chris

[EDIT]

I was going to add a comment but this is easier due to the bulk of information:

Since it is the Kinect, I will assume you have some relevant depth information associated with each point; if not, you will need to figure out how to get this.

The equation you will need to start off with is the one for the field of view (FOV):

o/d = i/f

Where:

f is the focal length of the lens, usually given in mm (18, 28, 30 and 50 mm are standard examples)

d is the distance of the object from the lens, gathered from the Kinect data

o is the object dimension (or "field of view"), perpendicular to and bisected by the optical axis

i is the image dimension (or "field stop"), perpendicular to and bisected by the optical axis

We need to calculate i (which is a diagonal measurement) first, since o is our unknown. For this we need the size of a pixel on the CCD. This will be in micrometres (µm), and you will need to find this information out; for now we will take it as 14 µm, which is standard for a midrange area-scan camera.

So first we need to work out the horizontal dimension of i (ih), which is the number of pixels across the width of the sensor multiplied by the size of a CCD pixel (we will use 640 x 320):

so: ih = 640 * 14um = 8960um

   = 8960/1000 = 8.96mm

Now we need the vertical dimension of i (iv); same process, but with the height:

so: iv = (320 * 14um) / 1000 = 4.48mm

Now i is found by the Pythagorean theorem, a^2 + b^2 = c^2:

so: i = sqrt(ih^2 + iv^2)

  = 10.02 mm

Now we will assume we have a 28 mm lens. Again, the exact value will have to be found out. So our equation, rearranged to give us o, is:

o = (i * d) / f

Remember o will be diagonal (we will assume the object or point is 50 mm away):

o = (10.02mm * 50mm) / 28mm

  = 17.89mm

Now we need to work out the horizontal dimension of o (oh) and the vertical dimension of o (ov), as this will give us the distance per pixel that the object has moved. Now, as the FOV is directly proportional to the CCD (i.e. i is directly proportional to o), we will work out a ratio k:

k = i/o

= 10.02 / 17.89 

= 0.56

so:

o horizontal dimension (oh):

oh = ih / k

= 8.96mm / 0.56 = 16mm per pixel

o vertical dimension (ov):

ov = iv / k

= 4.48mm / 0.56 = 8mm per pixel

Now we have the constants we require; let's use them in an example. If our object at 50 mm moves from position (0,0) to (2,4), then the measurements in real life are:

(2*16mm, 4*8mm) = (32mm, 32mm)

Again, by the Pythagorean theorem, a^2 + b^2 = c^2:

Total distance = sqrt(32^2 + 32^2)

           = 45.25mm

Complicated, I know, but once you have this in a program it's easier. For every point you will have to repeat at least half the process, as d, and therefore o, will change for each point you're examining. A sketch of the whole chain in code follows below.
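
For example, the full calculation as one small program (every input is the assumed example value from the text above, not a measured camera parameter):

' The worked example above as code; the 14 um pixel, 640 x 320 sensor,
' 28 mm lens and 50 mm distance are the assumed example values.
Module PixelToDistance
    Sub Main()
        Dim pixelSize = 0.014 ' CCD pixel size in mm (14 um)
        Dim resX = 640        ' sensor width in pixels
        Dim resY = 320        ' sensor height in pixels
        Dim f = 28.0          ' focal length in mm
        Dim d = 50.0          ' object distance in mm (e.g. from Kinect)

        Dim ih = resX * pixelSize             ' 8.96 mm
        Dim iv = resY * pixelSize             ' 4.48 mm
        Dim i = Math.Sqrt(ih * ih + iv * iv)  ' 10.02 mm (diagonal)
        Dim o = i * d / f                     ' 17.89 mm (object diagonal)
        Dim k = i / o                         ' 0.56
        Dim oh = ih / k                       ' 16 mm
        Dim ov = iv / k                       ' 8 mm

        ' Movement (0,0) -> (2,4), as in the worked example
        Dim dx = 2 * oh
        Dim dy = 4 * ov
        Dim total = Math.Sqrt(dx * dx + dy * dy) ' 45.25 mm
        Console.WriteLine("Moved " & total.ToString("F2") & " mm")
    End Sub
End Module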

Hope this gets you on your way,

Cheers Chris

answered Sep 23 '22 by Chris