I'm working on an image-stitching project using OpenCV 2.3.1 in Visual Studio 2010, and I currently have two problems.
(My reputation is not over 10 so I can only post 2 hyperlinks in this post. I'll post another 2 in the comment area)
I followed the steps described in the following link: Stitching 2 images in opencv
and the picture below is the result I currently have:
The two images were taken with the camera at the same position but pointing in different directions (I used a tripod).
Then I tried another test. This time I again took 2 images with the same camera, but I moved the camera a little from its original position before taking the second picture. The result is rather terrible, as shown:
Problem 1: Does it mean that if the 2 cameras are at different positions, the standard panorama stitching technique (based on a homography or camera rotational model) won't work?
I tried to stitch images taken from different positions because, in the future, I would like to run the stitching algorithm on 2 cameras at different positions so as to widen the FOV, sort of like this: (I'll post the picture in the comments; please check Widen FOV)
but now it looks like I'm going the wrong way :(
I just found out that feature finding and matching take most of the algorithm's running time.
Problem 2: Can I compute features in only a certain part (the overlap area) of the 2 images and still perform the transformation using the homography, i.e., NOT compute them over the whole image?
My thinking is that it's not necessary to compute features over the whole image if I specify the amount of overlap between the 2 images; computing and matching features only in the overlap area should greatly increase the speed.
The first code shown below is the original version, which computes features across both whole images.
int minHessian = 3000;
SurfFeatureDetector detector( minHessian );
//-- Detect the keypoints using the SURF detector
vector<KeyPoint> keypoints_1, keypoints_2;
detector.detect( frm1, keypoints_1 );
detector.detect( frm2, keypoints_2 );
//-- Calculate descriptors (feature vectors)
SurfDescriptorExtractor extractor;
Mat descriptors_1, descriptors_2;
extractor.compute( frm1, keypoints_1, descriptors_1 );
extractor.compute( frm2, keypoints_2, descriptors_2 );
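(For context, the matching and homography stage that follows this code is not shown in the question. Under OpenCV 2.3.1 it might look roughly like the sketch below, assuming using namespace cv and std as in the snippets above; the BruteForceMatcher choice and the RANSAC reprojection threshold are illustrative assumptions, not the exact code used.)
#include <opencv2/features2d/features2d.hpp>
#include <opencv2/calib3d/calib3d.hpp>
//-- Match descriptors (FlannBasedMatcher is a faster alternative)
BruteForceMatcher< L2<float> > matcher;
vector<DMatch> matches;
matcher.match( descriptors_1, descriptors_2, matches );
//-- Collect the matched point pairs
vector<Point2f> pts_1, pts_2;
for( size_t i = 0; i < matches.size(); i++ )
{
    pts_1.push_back( keypoints_1[ matches[i].queryIdx ].pt );
    pts_2.push_back( keypoints_2[ matches[i].trainIdx ].pt );
}
//-- RANSAC rejects outlier matches; 3.0 is a typical reprojection threshold
Mat H = findHomography( Mat(pts_1), Mat(pts_2), CV_RANSAC, 3.0 );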
I did the following to try to reduce the time required to run the whole algorithm:
detector.detect( frm1(Rect(0.5*frm1.cols,0,0.5*frm1.cols,frm1.rows)), keypoints_1 );
detector.detect( frm2(Rect(0,0,0.6*frm2.cols,frm2.rows)), keypoints_2 );
//-- Calculate descriptors (feature vectors)
SurfDescriptorExtractor extractor;
Mat descriptors_1, descriptors_2;
extractor.compute( frm1(Rect(0.5*frm1.cols,0,0.5*frm1.cols,frm1.rows)), keypoints_1, descriptors_1 );
extractor.compute( frm2(Rect(0,0,0.6*frm2.cols,frm2.rows)), keypoints_2, descriptors_2 );
Using the code above, the computation time is significantly decreased, but it gives a bad result: (I'll post the picture in the comments; please check Bad Result)
I'm currently stuck and have no idea what to do next. I would really appreciate any help. Thanks.
Problem 1: I can't be completely sure, but the problem with the stitching does seem to be due to the camera translation between the 2 pictures. With only a global homography transform, there is no way you can overlay the 2 images perfectly. A homography only suffices in the following 2 cases: (1) the scene is (approximately) planar, or (2) the camera only rotates about its optical center, without translating.
That said, your scene is fairly planar (the objects are far away compared to the translation of the camera), if not for the bottle, so an approximation by a homography may still be sufficient. You just need to blend the images properly. To do so, you first need to find a seam along which to "cut" the images where the difference between the 2 images is minimal, and then apply blending (e.g. Laplacian/multi-band blending) across it. For your problem of cameras mounted on top of a car, this approximation may still be reasonable, so you may still be able to use a homography model.
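As a rough illustration of the blending idea (a sketch of mine, not a prescribed implementation): assume both images have already been warped onto a common canvas of the same size, and cross-fade them with a linear weight ramp across the overlap. A Laplacian (multi-band) blender would apply the same idea per frequency band. The names warp1, warp2 and the overlap columns [x0, x1) are assumptions for the sketch, which again assumes using namespace cv:
#include <opencv2/core/core.hpp>
//-- Feather-blend two pre-warped CV_8UC3 images over columns [x0, x1)
Mat feather_blend( const Mat& warp1, const Mat& warp2, int x0, int x1 )
{
    Mat out = warp1.clone(); // columns left of x0 keep warp1 unchanged
    for( int y = 0; y < out.rows; y++ )
    {
        for( int x = x0; x < out.cols; x++ )
        {
            // Weight ramps linearly from 0 (pure warp1) to 1 (pure warp2)
            double a = ( x >= x1 ) ? 1.0 : double(x - x0) / double(x1 - x0);
            Vec3b p1 = warp1.at<Vec3b>(y, x);
            Vec3b p2 = warp2.at<Vec3b>(y, x);
            for( int c = 0; c < 3; c++ )
                out.at<Vec3b>(y, x)[c] =
                    saturate_cast<uchar>( (1.0 - a) * p1[c] + a * p2[c] );
        }
    }
    return out;
}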
If a homography with proper blending is not sufficient, you may need to look at either 3D reconstruction techniques or other methods that relax the homography requirement. There are a couple of papers in the literature that deal with parallax during mosaicking, but these are significantly more complex than basic homography stitching.
Problem 2: Yes, that can be done, as long as you are very sure where the overlap is. However, you need to make sure that this overlapping region is not too small, or else the homography you compute may be skewed. The problem with your office dataset appears to be due to camera translation, as explained before.
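One additional pitfall with the ROI code in the question (an illustrative sketch of mine, not part of the original answer): detect() on a sub-matrix such as frm1(Rect(...)) returns keypoint coordinates relative to that ROI, so they must be shifted back into full-image coordinates before estimating the homography; otherwise the transform is expressed in the wrong frame. A minimal sketch, assuming the same OpenCV 2.x SURF API and namespaces as in the question:
//-- Detect and describe SURF features only inside the expected overlap
Rect roi1( frm1.cols / 2, 0, frm1.cols / 2, frm1.rows ); // right half of frm1
Rect roi2( 0, 0, (int)(0.6 * frm2.cols), frm2.rows );    // left 60% of frm2
SurfFeatureDetector detector( minHessian );
vector<KeyPoint> keypoints_1, keypoints_2;
detector.detect( frm1(roi1), keypoints_1 );
detector.detect( frm2(roi2), keypoints_2 );
SurfDescriptorExtractor extractor;
Mat descriptors_1, descriptors_2;
extractor.compute( frm1(roi1), keypoints_1, descriptors_1 );
extractor.compute( frm2(roi2), keypoints_2, descriptors_2 );
//-- Shift keypoints from ROI coordinates back to full-image coordinates
for( size_t i = 0; i < keypoints_1.size(); i++ )
    keypoints_1[i].pt.x += roi1.x; // roi1.y is 0, so no vertical shift needed
// roi2 starts at (0,0), so keypoints_2 needs no shift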
Lastly, you might want to adjust your SURF feature detection/matching parameters a bit. The number of feature points seems to be slightly on the low side.
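For example, the minHessian = 3000 used in the question is quite strict; commonly suggested Hessian thresholds for SURF are around 300-500, which will admit many more (weaker) keypoints:
//-- A lower Hessian threshold yields more keypoints, at some cost in speed
int minHessian = 400;
SurfFeatureDetector detector( minHessian );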