I already found a lot of questions and answers about image stitching and warping with OpenCV, but I still could not find an answer to my question.
I have two fisheye cameras, which I calibrated successfully, so the distortion is removed in both images.
Now I want to stitch those rectified images together. So I pretty much follow this example, which is also mentioned in a lot of other stitching questions: Image Stitching Example
So I do the keypoint and descriptor detection, find matches, and get the homography matrix so I can warp one of the images, which gives me a really stretched image as a result. The other image stays untouched. This stretching is something I want to avoid, so I found a nice solution here: Stretch solution.
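For reference, a minimal sketch of that single-homography pipeline might look like this (the file names, ORB features and the RANSAC threshold are my own assumptions, not the code from the linked example):

    import cv2
    import numpy as np

    img1 = cv2.imread("left.jpg")   # assumed input, already undistorted
    img2 = cv2.imread("right.jpg")  # assumed input, already undistorted

    # detect keypoints and descriptors
    orb = cv2.ORB_create(2000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)

    # match descriptors
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

    # H maps points of img2 into the coordinate frame of img1
    H, _ = cv2.findHomography(pts2, pts1, cv2.RANSAC, 5.0)

    # img1 stays untouched, img2 receives all of the perspective stretching
    h, w = img1.shape[:2]
    pano = cv2.warpPerspective(img2, H, (w * 2, h))
    pano[0:h, 0:w] = img1

This reproduces exactly the problem described above: one image keeps its original geometry while the other one is heavily stretched by the warp.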
On slide 7 you can see that both images are warped. I think this will reduce the stretching of a single image (in my opinion, the stretching will be split between the two images, for example 50:50). If I am wrong, please tell me.
The problem I have is that I don't know how to warp two images so that they fit. Do I have to calculate two homographies? Do I have to define a reference plane like a Rect() or something? How can I achieve a warping result as shown on slide 7?
To make it clear, I am not studying at TU Dresden, so this is just something I found while doing research.
Warping one of the two images into the coordinate frame of the other is more common because it is easier: one can directly compute the 2D warping transformation from image correspondences.
Warping both images into a new coordinate frame is possible but more complex, because it involves 3D transformations and requires accurately defining a new 3D coordinate frame with respect to the initial two.
The basic idea is (very roughly) represented in the hand drawing on slide #2 of the linked presentation. I made a bigger one:
Basically, the procedure would be as follows:
1. In the two original images, detect keypoints, match them, and estimate the homography H_2<-1 that maps image 1 into image 2.

2. With the known camera matrix K, decompose the homography into the relative rotation between the two views: R_2<-1 = K^-1 * H_2<-1 * K.

3. Express the pose of camera 1 in the frame of camera 2; under a pure rotation there is no translation component:

   T_2<-1 = R_2<-1 * [ I3 | 0 ]

4. Define the new coordinate frame halfway between the two cameras, i.e. rotate by half of the relative rotation, and derive the pose of the new frame with respect to camera 2:

   T_n<-1 = square_root(R_2<-1) * [ I3 | 0 ]
   T_n<-2 = T_n<-1 * (T_2<-1)^-1

5. Compute the homographies that warp each image into the new frame, where R_n<-1 and R_n<-2 are the rotation parts of T_n<-1 and T_n<-2:

   H_n<-1 = K * R_n<-1 * K^-1
   H_n<-2 = K * R_n<-2 * K^-1

6. Warp both images with these homographies and blend the results in the new frame.
Note that steps 2 and 3 assume that the images are taken from the same point in space (pure camera rotation); otherwise there could be some parallax between the images, which could produce visible stitching imperfections.
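A rough Python/OpenCV sketch of these steps might look as follows. It assumes the camera matrix K from your calibration and the matched points pts1/pts2 from the matching stage above; the half_rotation helper, the canvas size and the translation offset are illustrative choices, not a fixed recipe:

    import cv2
    import numpy as np

    def half_rotation(R):
        # square_root(R): rotation about the same axis by half the angle
        rvec, _ = cv2.Rodrigues(R)
        R_half, _ = cv2.Rodrigues(rvec * 0.5)
        return R_half

    def warp_to_middle_frame(img1, img2, pts1, pts2, K):
        # Step 1: homography H_2<-1 mapping image 1 points onto image 2
        H21, _ = cv2.findHomography(pts1, pts2, cv2.RANSAC, 5.0)

        # Steps 2-3: relative rotation R_2<-1 = K^-1 * H_2<-1 * K (pure rotation
        # assumed); project onto the closest true rotation to absorb noise and scale
        R21 = np.linalg.inv(K) @ H21 @ K
        U, _, Vt = np.linalg.svd(R21)
        R21 = U @ Vt

        # Step 4: new frame halfway between the two cameras
        Rn1 = half_rotation(R21)   # R_n<-1
        Rn2 = Rn1 @ R21.T          # R_n<-2 = R_n<-1 * R_2<-1^-1

        # Step 5: warping homographies into the new frame
        Kinv = np.linalg.inv(K)
        Hn1 = K @ Rn1 @ Kinv
        Hn2 = K @ Rn2 @ Kinv

        # Step 6: warp both images onto a common canvas; the translation T only
        # shifts everything into positive pixel coordinates (offset chosen ad hoc)
        h, w = img1.shape[:2]
        T = np.array([[1.0, 0.0, 0.5 * w],
                      [0.0, 1.0, 0.5 * h],
                      [0.0, 0.0, 1.0]])
        size = (2 * w, 2 * h)
        warp1 = cv2.warpPerspective(img1, T @ Hn1, size)
        warp2 = cv2.warpPerspective(img2, T @ Hn2, size)

        # crude overlay instead of proper blending
        return np.where(warp2 > 0, warp2, warp1)

Halving the Rodrigues rotation vector gives the principal square root of the rotation matrix (valid for rotations below 180 degrees), which is what square_root() in step 4 stands for. For a clean result you would replace the final np.where overlay with proper blending, e.g. feathering or multi-band blending.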
Hope this helps.
Reference: [HZ03] Hartley, Richard, and Andrew Zisserman. Multiple View Geometry in Computer Vision. Cambridge University Press, 2003.