I'm currently working on a project in which I have to estimate the 3D coordinates of 2D interest points detected with a monocular camera.
To be more precise, my input is a calibrated image sequence, and whenever a new image arrives I need to triangulate points between the previous image and the current one to obtain 3D points.
To do this, I'm following these steps:

1. Detect and match interest points between the previous and the current image.
2. Estimate the essential matrix from the matches.
3. Recover the relative camera pose (R, t) from the essential matrix.
4. Triangulate the matched points to get their 3D coordinates.
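For reference, the triangulation step can be sketched in plain NumPy as a linear (DLT) triangulation. This is a minimal illustration, not necessarily the exact routine you use:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point seen in two views.

    P1, P2 : 3x4 projection matrices (K [R | t] for each camera).
    x1, x2 : pixel coordinates of the point in each image, shape (2,).
    Returns the 3D point as a Euclidean (non-homogeneous) vector.
    """
    # Each observation x = P X gives two linear constraints on X.
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The solution is the right singular vector of A with smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]
```

Given consistent projection matrices and noise-free matches, this recovers the original 3D point exactly; with real detections, each row of `A` is only approximately satisfied and the SVD gives a least-squares solution.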
The resulting 3D points are not correct when I reproject them onto the images. However, I have read that triangulated points are defined only up to an indeterminate scale factor.
So my question is: what does "up to scale" mean in this context? And how can I obtain the real 3D points in the scene's world coordinate frame?
I would be thankful for any help!
You are likely to have a bug or a poorly estimated essential matrix. The unknown scale factor cannot be responsible for the reconstruction errors you see: regardless of the global scale, reprojecting a 3D point estimated from good matches with a valid essential matrix onto the image pair should be consistent with the measured 2D points.
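A quick way to check this is to reproject your triangulated points and measure the pixel error. A sketch (the function and variable names here are placeholders for your own data, not a fixed API):

```python
import numpy as np

def mean_reprojection_error(K, R, t, pts3d, pts2d):
    """Mean pixel distance between projected 3D points and their 2D detections.

    K      : 3x3 intrinsic matrix.
    R, t   : rotation (3x3) and translation (3,) of the camera.
    pts3d  : (N, 3) triangulated points.
    pts2d  : (N, 2) detected pixel coordinates in the same view.
    """
    # Build the 3x4 projection matrix P = K [R | t].
    P = K @ np.hstack([R, t.reshape(3, 1)])
    # Homogeneous 3D points, shape (4, N).
    X = np.vstack([pts3d.T, np.ones(len(pts3d))])
    x = P @ X
    x = (x[:2] / x[2]).T  # pixel coordinates, shape (N, 2)
    return np.linalg.norm(x - pts2d, axis=1).mean()
```

If this error is large (more than a few pixels) in either view, the problem is in the pose or triangulation code, not in the scale ambiguity.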
The meaning of "up to scale" in this context is that, even with intrinsically calibrated cameras, the standard method of estimating the essential matrix yields the same result if you replace your scene with one in which everything is larger or smaller by the same amount. You can remove this ambiguity either by knowing the true distance between two identifiable points in the scene, or by obtaining the magnitude of the camera translation (the baseline) from an external source, such as odometry or an additional sensor.
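The ambiguity is easy to demonstrate numerically: scaling the scene points and the camera baseline by the same factor leaves every projected pixel unchanged, so no image measurement can recover the factor. A small synthetic check (all values here are arbitrary illustration data):

```python
import numpy as np

rng = np.random.default_rng(0)
K = np.array([[500., 0., 320.], [0., 500., 240.], [0., 0., 1.]])
R = np.eye(3)                        # second-camera rotation (identity for simplicity)
t = np.array([[1.0], [0.0], [0.0]])  # second-camera baseline

# Random scene points in front of the cameras.
X = rng.uniform([-1, -1, 4], [1, 1, 8], size=(10, 3))

def project(K, R, t, X):
    x = K @ (R @ X.T + t)
    return (x[:2] / x[2]).T  # pixel coordinates, shape (N, 2)

s = 3.7  # arbitrary global scale factor
orig   = project(K, R, t, X)
scaled = project(K, R, s * t, s * X)  # scale the scene AND the baseline together

print(np.allclose(orig, scaled))  # True: the two images are pixel-identical
```

Algebraically, `K (R (sX) + s t) = s * K (R X + t)`, and the common factor `s` cancels in the perspective division, which is exactly why monocular reconstruction is only defined up to scale.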