OpenCV stitch images by warping both

I have already found a lot of questions and answers about image stitching and warping with OpenCV, but I still could not find an answer to my question.

I have two fisheye cameras which I calibrated successfully so the distortion is removed in both images.

Now I want to stitch those rectified images together. So I pretty much follow this example, which is also mentioned in a lot of other stitching questions: Image Stitching Example

So I do the keypoint and descriptor detection. I find matches and also get the homography matrix, so I can warp one of the images, which gives me a really stretched image as a result. The other image stays untouched. The stretching is something I want to avoid. So I found a nice solution here: Stretch solution.

On slide 7 you can see that both images are warped. I think this will reduce the stretching of one image (in my opinion the stretching will be split between the two images, roughly 50:50). If I am wrong, please tell me.

The problem I have is that I don't know how to warp two images so that they fit. Do I have to calculate two homographies? Do I have to define a reference plane like a Rect() or something? How do I achieve a warping result like the one shown on slide 7?

To make it clear, I am not studying at TU Dresden so this is just something I found while doing research.

asked Jul 14 '15 by DamBedEi


1 Answer

Warping one of the two images in the coordinate frame of the other is more common because it is easier: one can directly compute the 2D warping transformation from image correspondences.

Warping both images into a new coordinate frame is possible but more complex, because it involves 3D transformations and requires accurately defining a new 3D coordinate frame with respect to the initial two.

The basic idea is (very roughly) represented in the hand drawing on the slide #2 in the linked presentation. I made a bigger one:

[hand drawing: the two original image planes and the new common image plane between them]

Basically, the procedure would be as follows:

  1. If your cameras are calibrated, you can estimate the relative 3D pose between the two images exclusively from feature correspondences by computing the fundamental matrix, deducing the essential matrix [HZ03 paragraph 9.6 and equation 9.12], and deducing the relative pose [HZ03 paragraph 9.6.2]. Hence, you can estimate for example the 3D rigid transformation T_2<-1 mapping the coordinate frame of img1 onto the coordinate frame of img2:

T_2<-1 = R_2<-1 * [ I3 | 0 ]

  2. From this, you can define very accurately the image plane for the new image, with respect to the other two images. For example:

T_n<-1 = square_root( R_2<-1 ) * [ I3 | 0 ]

T_n<-2 = T_n<-1 * T_2<-1^-1

  3. From these two relative poses, you can derive the pixel 2D transformations to warp the two images into the new image plane [HZ03, example 13.2]. Basically, the warping homographies respectively from img1 to the new image and from img2 to the new image are:

H_n<-1 = K * R_n<-1 * K^-1

H_n<-2 = K * R_n<-2 * K^-1

  4. Then you can also compute the range of valid pixels (i.e. xmin, xmax, ymin, ymax) in the new image plane, to crop it and form a new image.

Note that step #3 assumes that the images are taken from the same point in space (pure camera rotation), otherwise there could be some parallax between the images, which could produce visible stitching imperfections.

Hope this helps.

Reference: [HZ03] Hartley, Richard, and Andrew Zisserman. Multiple view geometry in computer vision. Cambridge university press, 2003.

answered by BConic