Say I have an image obtained by applying a homography transformation H to some original image. The original image is not shown; the result of applying H to it is this image:
I want to rotate this image by 30 degrees about a suitable axis (possibly where a camera would be located, if there were one) to get this image:
How can I apply this rotation transformation using Python if I don't know the camera parameters? I can only specify the angle by which I want to rotate the image and the approximate axis about which I want to rotate. Also, how can I deduce the homography H' between the original image (before the homography) and the final rotated image, using H and the rotation transformation?
An interesting problem. To help explain my solution I'm going to define a few symbols: I1 is the original image (before the homography), I2 is the image you have after applying H to I1, I3 is the final rotated view you want to synthesize, K is the 3x3 camera intrinsic matrix, and R is the 3x3 rotation matrix you want to apply.
Because your camera is rotating and not translating, you can synthesize virtual views for any rotation matrix R by warping your images with a corresponding homography matrix. Therefore you don't need to try to reconstruct the scene in 3D in order to synthesize these views.
For now I'm going to assume we have an estimate of K and give the equation for the homography from I1 to I3, which answers the last part of your question. After that I'll give a nice way to estimate K, and then you have all you need.
Let p=(px,py) be a 2D point in I1. We define this point in homogeneous coordinates with the vector p=(px,py,1). Similarly, let the point q=(qx,qy,1) be the position of point p in I3. The homography matrix H' that transforms p to q is given by H' = K R inv(K) H. For any R that you specify, you would use this to compute H', then warp I1 to synthesize the new view using e.g. OpenCV's warpPerspective function.
Derivation: we first apply H to map the point into I2. Next we transform the point into 3D camera coordinates by applying inv(K). We then apply the rotation R and finally project back onto the image with K. If you're unsure about applying projective transforms like this, I highly recommend reading Hartley and Zisserman's book Multiple View Geometry in depth.
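As a concrete illustration, here is a minimal sketch of that pipeline in Python with OpenCV. The file names, image size, focal length, and rotation axis are made-up placeholders (substitute your own K, e.g. estimated as described below), and R is built from an axis-angle vector with cv2.Rodrigues:

```python
import numpy as np
import cv2

# --- assumed inputs (placeholders) --------------------------------------
img2 = cv2.imread("I2.png")              # the image you have (after H)
H = np.eye(3)                            # the known homography I1 -> I2
h, w = img2.shape[:2]
f = 800.0                                # focal length in pixels (guess/estimate)
K = np.array([[f, 0, w / 2.0],
              [0, f, h / 2.0],
              [0, 0, 1.0]])

# --- rotation: 30 degrees about the camera's vertical (y) axis ----------
angle = np.deg2rad(30.0)
axis = np.array([0.0, 1.0, 0.0])         # pick the axis you want to rotate about
R, _ = cv2.Rodrigues(axis * angle)       # axis-angle vector -> 3x3 rotation matrix

# --- homography from I1 to the rotated view I3: H' = K R inv(K) H -------
H_prime = K @ R @ np.linalg.inv(K) @ H

# Since you only have I2 (not I1), warp I2 into I3 with K R inv(K) alone.
H_2_to_3 = K @ R @ np.linalg.inv(K)
img3 = cv2.warpPerspective(img2, H_2_to_3, (w, h))
cv2.imwrite("I3.png", img3)
```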
Computing K. For this I propose a cunning strategy using the Statue of Liberty. Specifically, notice that she is standing on a platform, which I am going to assume is square. This is the killer trick! We are going to do a rough camera calibration using that square. I'm going to assume there is no lens distortion and that K has the simplified form K = [f,0,cx; 0,f,cy; 0,0,1]. This means the aspect ratio is 1 (usually roughly the case for digital cameras) and the principal point is at the centre of the image: cx=w/2 and cy=h/2, where w and h are the width and height of the image respectively. Trying to estimate lens distortion and a more complex K matrix would be very hard. Lens distortion doesn't seem significant here because straight edges in the scene stay roughly straight in the images, so it can be ignored.
So now we are going to compute f. This will be done using plane-based camera calibration. The famous reference for this is Zhang: A Flexible New Technique for Camera Calibration, located at https://www.microsoft.com/en-us/research/publication/a-flexible-new-technique-for-camera-calibration/
The way this would work is first to click on the four visible corners of the statue's square platform in I2 (see attached image). Let's call these p1, p2, p3 and p4, starting at the bottom left and going round clockwise. You can then use OpenCV's camera calibration methods to estimate K from these four corner points (a direct plane-based sketch is given below). Importantly, the reason we can do this is that we know the platform is square. For deeper insight into plane-based calibration I recommend reading Zhang's paper. If you are experiencing difficulty I could do it myself in a couple of minutes and send over the K matrix.
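Here is a rough sketch of that calibration step under the assumptions above (unit aspect ratio, principal point at the image centre, no distortion). The corner coordinates and image size are hypothetical placeholders; the idea is to map a canonical unit square to the clicked corners and solve Zhang's orthogonality constraint for the single unknown f:

```python
import numpy as np
import cv2

# Hypothetical clicked pixel coordinates of the platform's four visible
# corners in I2 (p1..p4, bottom left then clockwise) -- use your own clicks.
corners_I2 = np.float32([[320, 620], [280, 410], [540, 380], [600, 600]])

h, w = 720, 960                          # assumed size of I2
cx, cy = w / 2.0, h / 2.0                # principal point at the image centre

# Homography mapping a canonical unit square (the platform) to the image.
square = np.float32([[0, 0], [0, 1], [1, 1], [1, 0]])
H_sq = cv2.getPerspectiveTransform(square, corners_I2)

# Zhang's orthogonality constraint: with K = [f,0,cx; 0,f,cy; 0,0,1],
# the first two columns h1, h2 of H_sq satisfy h1^T K^-T K^-1 h2 = 0,
# which can be solved for f. (This fails if the square is viewed
# fronto-parallel, i.e. h1[2]*h2[2] is ~0, or if the ratio comes out negative.)
h1, h2 = H_sq[:, 0], H_sq[:, 1]
num = ((h1[0] - cx * h1[2]) * (h2[0] - cx * h2[2]) +
       (h1[1] - cy * h1[2]) * (h2[1] - cy * h2[2]))
f = np.sqrt(-num / (h1[2] * h2[2]))

K = np.array([[f, 0, cx],
              [0, f, cy],
              [0, 0, 1.0]])
print(K)
```

Zhang's second constraint (that the first two columns of inv(K) H_sq have equal norm) gives an independent estimate of f, which you can compare against this one as a sanity check.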
As a final point, a slight variation of this approach is to calibrate using your original image (assuming you still have it). The reason is that H could distort I2 so that its effective aspect ratio is no longer close to 1 and its principal point is no longer near the image centre. If you calibrate using your original image (giving a matrix K1), then you would use K = H K1.
To apply the homography I would recommend using OpenCV, more specifically the warpPerspective function: https://docs.opencv.org/3.0-beta/modules/imgproc/doc/geometric_transformations.html#warpperspective
Because we are talking about a pure rotation with no camera translation, you can indeed produce the image that corresponds to this rotation using just a homography. But to find the homography parameters as a function of the axis direction and rotation angle, you need to know the camera intrinsic parameters, mainly the focal length.
If you had the camera model you could work out the equations, but another way to obtain the homography matrix is to calculate what the destination point coordinates would be after the transform and then use the findHomography function. Alternatively, you can find matching points between the two images and compute the homography from those.
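If you go the matching-points route, a minimal sketch with OpenCV could look like the following (the file names are placeholders, and ORB plus brute-force matching is just one possible way to obtain the correspondences):

```python
import numpy as np
import cv2

# Hypothetical file names -- substitute your own two images.
img_a = cv2.imread("view_a.png", cv2.IMREAD_GRAYSCALE)
img_b = cv2.imread("view_b.png", cv2.IMREAD_GRAYSCALE)

# Detect and match features between the two views.
orb = cv2.ORB_create(2000)
kp_a, des_a = orb.detectAndCompute(img_a, None)
kp_b, des_b = orb.detectAndCompute(img_b, None)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des_a, des_b), key=lambda m: m.distance)

# Estimate the homography from the matched point pairs (RANSAC rejects outliers).
pts_a = np.float32([kp_a[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
pts_b = np.float32([kp_b[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
H_ab, mask = cv2.findHomography(pts_a, pts_b, cv2.RANSAC, 5.0)

# Warp the first view into the second with the estimated homography.
h, w = img_b.shape[:2]
warped = cv2.warpPerspective(img_a, H_ab, (w, h))
```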
If you don't have the camera model and rotation parameters, or matching points in both images, there is nothing you can do: you need at least one of these to find the homography. You could perhaps try to guess the camera model. What exactly is the information you have?