I am trying to use OpenCV to correct an image for lens distortion and then calculate the real-world coordinates of a given pixel coordinate. I cannot find any examples online or in the OpenCV book of how to do this.
I have done the camera calibration with the chessboard image. Now I just need a basic function that I can give pixel coordinates to and that will give me real-world coordinates based on the camera matrix, the distortion coefficients, and the rotation and translation vectors.
Does anyone know how to do this?
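For reference, here is a minimal sketch of such a function, assuming the world points you care about lie on the calibration (chessboard) plane, taken as Z = 0, and that rvec/tvec describe that same view. The name pixelToWorld is hypothetical, not an OpenCV API:

#include <opencv2/opencv.hpp>
#include <vector>

// Sketch: map one pixel to world coordinates on the Z = 0 plane.
// cameraMatrix, distCoeffs, rvec, tvec are the outputs of calibrateCamera.
cv::Point3d pixelToWorld(const cv::Point2f& pixel,
                         const cv::Mat& cameraMatrix,
                         const cv::Mat& distCoeffs,
                         const cv::Mat& rvec,
                         const cv::Mat& tvec)
{
    // 1. Undo the lens distortion and normalise; the result (x, y)
    //    defines the viewing ray (x, y, 1) in camera coordinates.
    std::vector<cv::Point2f> src{pixel}, norm;
    cv::undistortPoints(src, norm, cameraMatrix, distCoeffs);
    cv::Mat ray = (cv::Mat_<double>(3, 1) << norm[0].x, norm[0].y, 1.0);

    // 2. Express the ray and translation in the world frame:
    //    world = R^T * (s * ray - t) for some unknown scale s.
    cv::Mat R;
    cv::Rodrigues(rvec, R);
    cv::Mat rayWorld = R.t() * ray;
    cv::Mat tWorld = R.t() * tvec;

    // 3. Pick s so that the reconstructed point lands on Z = 0.
    double s = tWorld.at<double>(2) / rayWorld.at<double>(2);
    cv::Mat world = s * rayWorld - tWorld;

    return cv::Point3d(world.at<double>(0), world.at<double>(1), 0.0);
}

The depth ambiguity disappears here only because of the plane assumption; for points off that plane, a single image is not enough (see the second answer below).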
Take a look at the findHomography() function. If you know the real-world locations of a set of points, you can use this function to create a transformation matrix, which you can then use with the perspectiveTransform() function. Note that this only works if all of the points lie on a single plane (such as the floor or the chessboard itself).
#include <opencv2/opencv.hpp>
using namespace cv;

std::vector<Point2f> worldPoints;
std::vector<Point2f> cameraPoints;
// insert corresponding points in both vectors
Mat perspectiveMat_ = findHomography(cameraPoints, worldPoints, RANSAC);
// use perspectiveTransform to map other image points to real world coordinates
std::vector<Point2f> camera_corners;
// insert points from your camera image here
std::vector<Point2f> world_corners;
perspectiveTransform(camera_corners, world_corners, perspectiveMat_);
You can find more information about findHomography() in the OpenCV documentation.
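For concreteness, here is one way the two point vectors might be filled, reusing the chessboard you already calibrated with. The 9x6 pattern size, the 25 mm square size, and the file name "board.jpg" are assumptions for illustration, not values from the question:

#include <opencv2/opencv.hpp>
#include <vector>

int main()
{
    std::vector<cv::Point2f> cameraPoints, worldPoints;
    cv::Mat image = cv::imread("board.jpg", cv::IMREAD_GRAYSCALE);
    const cv::Size patternSize(9, 6); // inner corners per row/column (assumed)
    const float squareSize = 25.0f;   // square edge in mm (assumed)

    if (cv::findChessboardCorners(image, patternSize, cameraPoints)) {
        // Corners are returned row by row, so generate the matching
        // board coordinates in the same order.
        for (int r = 0; r < patternSize.height; ++r)
            for (int c = 0; c < patternSize.width; ++c)
                worldPoints.emplace_back(c * squareSize, r * squareSize);

        cv::Mat H = cv::findHomography(cameraPoints, worldPoints, cv::RANSAC);
        // H now maps image pixels to millimetres on the board plane.
    }
    return 0;
}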
If I understand correctly, you need a world point from an image point. With a single monocular camera this problem is unsolvable in general: a pixel only pins down a ray through the camera centre, so you cannot determine the depth (distance) of the real-world point from the camera. The exception is when you have extra knowledge, such as the point lying on a known plane, which is what the homography approach above exploits.
There are visual simultaneous localization and mapping (SLAM) algorithms that build a map of the world and compute the trajectory of the camera from a video, but they are a whole other topic.