 

OpenCV: Transforming an Image

I am new to OpenCV. I want to transform two images, a src and a dst image. I am using cv::estimateRigidTransform() to calculate the transformation matrix and then cv::warpAffine() to transform dst to src. When I compare the new transformed image with the src image it looks almost the same (transformed), but when I take the absolute difference of the transformed image and the src image, there is a lot of difference. What should I do? My dst image also has some rotation and translation relative to src. Here is my code:

cv::Mat transformMat = cv::estimateRigidTransform(src, dst, true);
cv::Mat output;
cv::Size dsize = dst.size();    // This specifies the output image size
cv::warpAffine(src, output, transformMat, dsize);

Source image:
[image: source]

Destination image:
[image: destination]

Output image:
[image: warped output]

Absolute difference image:
[image: absolute difference]

Thanks

asked May 31 '13 by Mudasar




1 Answer

You have some misconceptions about the process.

The method cv::estimateRigidTransform takes as input two sets of corresponding points and then solves a set of equations to find the transformation matrix. The resulting transformation maps the src points onto the dst points (exactly, or as closely as possible when an exact match cannot be achieved - for example with floating-point coordinates).
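
For illustration, here is a minimal sketch of calling it with explicit point correspondences (the point values below are made up for the example):

std::vector<cv::Point2f> srcPts = { {10, 10}, {200, 30}, {50, 220} };   // made-up source points
std::vector<cv::Point2f> dstPts = { {12, 14}, {205, 28}, {48, 225} };   // their (approximate) matches
// false = rigid transform (rotation + translation + uniform scale), true = full affine
cv::Mat M = cv::estimateRigidTransform(srcPts, dstPts, false);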

If you call estimateRigidTransform on two images instead, OpenCV first finds matching pairs of points using an internal method (see the OpenCV docs).
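
If you want control over that step, one common approach (not necessarily identical to what OpenCV does internally) is to track corners yourself and feed the surviving pairs to estimateRigidTransform:

// src and dst must be 8-bit grayscale images for these calls
std::vector<cv::Point2f> srcPts, dstPts;
cv::goodFeaturesToTrack(src, srcPts, 200, 0.01, 10);               // pick up to 200 corners in src
std::vector<uchar> status;
std::vector<float> err;
cv::calcOpticalFlowPyrLK(src, dst, srcPts, dstPts, status, err);   // track them into dst
std::vector<cv::Point2f> goodSrc, goodDst;
for (size_t i = 0; i < status.size(); ++i)
    if (status[i]) { goodSrc.push_back(srcPts[i]); goodDst.push_back(dstPts[i]); }
cv::Mat M = cv::estimateRigidTransform(goodSrc, goodDst, true);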

cv::warpAffine then transforms the src image according to the given transformation matrix. But (almost) any such transformation is a lossy operation: the algorithm has to estimate some values because they are not directly available in the source pixels. This process is called interpolation - using the known information you calculate the unknown values. Some information on image scaling can be found on wiki. The same rules apply to other transformations - rotation, skew, perspective... Obviously this doesn't apply to plain translation.

Given your test images, I would guess that OpenCV takes the lampshade as the reference. From the difference image it is clear that the lampshade is transformed best. By default OpenCV uses linear interpolation for warping, as it is the fastest method, but you can choose a more advanced method for better results - again, consult the OpenCV docs.
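
For example (the flag choice here is just one option), you can pass the interpolation flag directly to cv::warpAffine:

// INTER_CUBIC (or INTER_LANCZOS4) is slower than the default INTER_LINEAR but usually smoother
cv::warpAffine(src, output, transformMat, dsize, cv::INTER_CUBIC);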

Conclusion: the result you got is pretty good if you bear in mind that it is the result of an automated process. If you want better results, you will have to find another method for selecting the corresponding points, or use a better interpolation method. Either way, after the transform the difference will not be 0. It is virtually impossible to achieve that, because a bitmap is a discrete grid of pixels, so there will always be some gaps that need to be estimated.
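
If you want to quantify how far off you are, a quick check (assuming src and output have the same size and type) is:

cv::Mat diff;
cv::absdiff(src, output, diff);          // per-pixel absolute difference
double meanError = cv::mean(diff)[0];    // average error per pixel (first channel)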

answered Sep 20 '22 by jnovacho