I have a mesh of, say, 11 x 11 vertices. The idea is that every point (vertex) holds the normalized floating-point position of that pixel in the warped image. For example, to stretch the upper-left corner, I write (-0.1, -0.1) into the first vertex.
I have the grid, but not the image-warping function. cv::remap does almost exactly this, but in the reverse direction: its mesh says which source pixel neighborhood maps to each point of a regular grid on the output.
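To make the direction mismatch concrete: cv::remap expects maps that give, for every output pixel, the source location to sample, i.e. dst(x, y) = src(mapX(x, y), mapY(x, y)). A minimal sketch of how my mesh would drive remap if it were already the inverse mapping (identifiers are hypothetical, and the mesh is assumed to be stored as two 11 x 11 CV_32F matrices of normalized coordinates):

```cpp
#include <opencv2/opencv.hpp>

// Hypothetical helper: meshX/meshY are 11x11 CV_32F grids of normalized
// coordinates (possibly outside [0,1], e.g. -0.1 for a stretch).
cv::Mat remapWithMesh(const cv::Mat& src, const cv::Mat& meshX, const cv::Mat& meshY)
{
    // Interpolate the sparse mesh up to one entry per output pixel.
    cv::Mat mapX, mapY;
    cv::resize(meshX, mapX, src.size(), 0, 0, cv::INTER_LINEAR);
    cv::resize(meshY, mapY, src.size(), 0, 0, cv::INTER_LINEAR);

    // Scale normalized coordinates to pixel coordinates.
    mapX *= static_cast<float>(src.cols - 1);
    mapY *= static_cast<float>(src.rows - 1);

    // remap treats the maps as dst -> src lookups -- the opposite of
    // how my mesh is defined, which is exactly the problem.
    cv::Mat dst;
    cv::remap(src, dst, mapX, mapY, cv::INTER_LINEAR);
    return dst;
}
```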
Is there a standard way in OpenCV to handle reverse warping? Can I easily transform the mesh, or is there another function I can use? I am using OpenCV and Boost, but any free library or tool that does this will work for me.
PS: I need this running on a Linux PC.
You need to calculate a second pair of maps for the reverse transform, and for that you need the transform's formula or matrix.
Step 1: Select four points on the remapped image. A good choice is the corners, provided they are not black (undefined).
Step 2: Find their places in the original image (look them up in the maps).
Step 3: Compute the homography between the two sets of points. findHomography() is the key.
Step 4: warpPerspective() the second image. Internally it calculates the grids and then calls remap(); see the sketch after these steps.
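Put together, steps 1-4 might look like the following sketch. The helper name is mine, and it assumes mapX/mapY are the CV_32FC1 maps used for the original remap call and 'remapped' is the image that call produced:

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

cv::Mat approximateInverseWarp(const cv::Mat& remapped,
                               const cv::Mat& mapX, const cv::Mat& mapY)
{
    // Step 1: four points on the remapped image -- here the corners.
    std::vector<cv::Point2f> remappedPts = {
        {0.f, 0.f},
        {static_cast<float>(remapped.cols - 1), 0.f},
        {static_cast<float>(remapped.cols - 1), static_cast<float>(remapped.rows - 1)},
        {0.f, static_cast<float>(remapped.rows - 1)}
    };

    // Step 2: the maps tell us where each of those pixels came from
    // in the original image.
    std::vector<cv::Point2f> originalPts;
    for (const cv::Point2f& p : remappedPts) {
        const int x = static_cast<int>(p.x), y = static_cast<int>(p.y);
        originalPts.emplace_back(mapX.at<float>(y, x), mapY.at<float>(y, x));
    }

    // Step 3: homography taking remapped coordinates back to original ones.
    cv::Mat H = cv::findHomography(remappedPts, originalPts);

    // Step 4: warpPerspective builds its grids internally and calls remap().
    cv::Mat restored;
    cv::warpPerspective(remapped, restored, H, remapped.size());
    return restored;
}
```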
If you want the same transform as the original one, swap the input and output points in findHomography(), or invert the resulting matrix with inv().
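In terms of the sketch above (both lines assume the point vectors and H from that sketch):

```cpp
// Swap the point sets to get the forward homography...
cv::Mat Hforward = cv::findHomography(originalPts, remappedPts);
// ...or simply invert the matrix found in Step 3.
cv::Mat HforwardToo = H.inv();
```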
If you want to keep the maps around for multiple calls (which is faster than calling warpPerspective() every time), you have to copy the map-building code out of warpPerspective() into a new function.
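A sketch of what that extracted function could look like, assuming H is the 3x3 CV_64F matrix returned by findHomography(). Build the maps once, then reuse them with cv::remap() for every frame:

```cpp
#include <opencv2/opencv.hpp>

void buildPerspectiveMaps(const cv::Mat& H, const cv::Size& size,
                          cv::Mat& mapX, cv::Mat& mapY)
{
    // remap() needs a dst -> src lookup, so apply the inverse of H
    // to every output pixel (this mirrors what warpPerspective does).
    cv::Mat Hinv = H.inv();
    mapX.create(size, CV_32FC1);
    mapY.create(size, CV_32FC1);

    for (int y = 0; y < size.height; ++y) {
        for (int x = 0; x < size.width; ++x) {
            // Apply Hinv to the homogeneous point (x, y, 1).
            const double X = Hinv.at<double>(0,0)*x + Hinv.at<double>(0,1)*y + Hinv.at<double>(0,2);
            const double Y = Hinv.at<double>(1,0)*x + Hinv.at<double>(1,1)*y + Hinv.at<double>(1,2);
            const double W = Hinv.at<double>(2,0)*x + Hinv.at<double>(2,1)*y + Hinv.at<double>(2,2);
            mapX.at<float>(y, x) = static_cast<float>(X / W);
            mapY.at<float>(y, x) = static_cast<float>(Y / W);
        }
    }
}

// Usage: call buildPerspectiveMaps once, then for each image:
// cv::remap(src, dst, mapX, mapY, cv::INTER_LINEAR);
```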
You may want to take a look at http://code.google.com/p/imgwarp-opencv/. This library seems to be exactly what you need: image warping based on a sparse grid.