I am a newbie in OpenCV. I am working with the following formula to calculate distance:
distance to object (mm) = (focal length (mm) × real height of the object (mm) × image height (pixels)) / (object height (pixels) × sensor height (mm))
Is there a function in OpenCV that can determine object distance? If not, any reference to sample code?
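For reference, the formula quoted above can be sketched as a small Python function. The variable names and the sample numbers are mine, chosen only to illustrate the units; they are not from a real camera:

```python
def distance_to_object_mm(focal_mm, real_height_mm, image_height_px,
                          object_height_px, sensor_height_mm):
    # Pinhole-camera estimate, exactly the formula quoted in the question.
    return (focal_mm * real_height_mm * image_height_px) / (
        object_height_px * sensor_height_mm)

# Illustrative values: 50 mm lens, 1 m tall object spanning 100 px
# of a 480 px frame, on a sensor 24 mm tall.
print(distance_to_object_mm(50, 1000, 480, 100, 24))  # 10000.0 mm = 10 m
```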
The formula: distance = size_obj * focal_length / size_obj_on_sensor. The whole method depends on figuring out the size of the object as it appears on the sensor given the focal length and the measured object size. Otherwise you have two unknowns.
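As a sketch, that formula in code, plugging in the measurements that appear later in this answer (a 70 mm object, a 4.15 mm focal length, and a 0.308 mm size on the sensor):

```python
def distance_mm(object_real_world_mm, focal_length_mm, object_image_sensor_mm):
    # distance = size_obj * focal_length / size_obj_on_sensor
    return object_real_world_mm * focal_length_mm / object_image_sensor_mm

print(distance_mm(70, 4.15, 0.308))  # ~943 mm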
It is called magnitude(), and it calculates the (Euclidean) distance for you. And if you pass an array of more than 4 vectors to compute distances for, it will use SSE (I think) to make it faster.
You need to know one of two things up front: the focal length (in mm) or the physical size of the sensor. I'm going to use the focal length, since I don't want to google for the sensor datasheet.
Use the OpenCV calibrate.py tool and the chessboard pattern PNG provided in the source code to generate a calibration matrix. I took about two dozen photos of the chessboard from as many angles as I could and exported the files to my Mac. For more detail, check OpenCV's camera calibration docs.
RMS: 1.13707201375
camera matrix:
[[ 2.80360356e+03   0.00000000e+00   1.63679133e+03]
 [ 0.00000000e+00   2.80521893e+03   1.27078235e+03]
 [ 0.00000000e+00   0.00000000e+00   1.00000000e+00]]
distortion coefficients: [ 0.03716712  0.29130959  0.00289784 -0.00262589 -1.73944359]
Checking the details of the chessboard photos you took, you will find the native resolution (3264x2448), and in their JPEG EXIF headers (visible in iPhoto) you can find the Focal Length value (4.15mm). These values will vary depending on the camera.
We need to know the pixels per millimeter (px/mm) on the image sensor. From the page on camera resectioning we know that f_x and f_y are focal-length times a scaling factor.
f_x = f * m_x
f_y = f * m_y
Since we know two of the three variables in each formula, we can solve for m_x and m_y. I just averaged 2803 and 2805 to get 2804.

m = 2804 px / 4.15 mm = 676 px/mm
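A quick sketch of that arithmetic, pulling f_x and f_y straight from the camera matrix above:

```python
f_x, f_y = 2803.60356, 2805.21893  # from the calibration camera matrix
focal_mm = 4.15                    # focal length from the EXIF data
m = ((f_x + f_y) / 2) / focal_mm   # sensor pixels per millimetre
print(round(m))  # 676
```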
I used OpenCV (C++) to grab the rotated rect of the object's points and determined the size of the object to be 41px. Note that I have already retrieved the corners of the object, and I ask the bounding rectangle for its size.
cv::RotatedRect box = cv::minAreaRect(cv::Mat(points));
The object is 41px in a video shot on the camera @ 640x480.
3264 / 676 = 640 / x
x = 133 px/mm
So given 41 px / (133 px/mm), we see that the size of the object on the image sensor is 0.308 mm.
distance_mm = object_real_world_mm * focal_length_mm / object_image_sensor_mm
distance_mm = 70mm * 4.15mm / 0.308mm
distance_mm = 943mm
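Putting the whole chain together as a sketch (all numbers come from the steps above; the final figure lands within a millimetre or two of 943 mm depending on where you round):

```python
m_full = 676                          # px/mm at the native 3264x2448 resolution
m_video = round(m_full * 640 / 3264)  # ~133 px/mm at the 640x480 video resolution
object_px = 41                        # measured object size in the video
object_image_sensor_mm = object_px / m_video   # ~0.308 mm on the sensor
distance_mm = 70 * 4.15 / object_image_sensor_mm
print(round(distance_mm))  # ~942 mm (943 mm if you round the sensor size first)
```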
This happens to be pretty good. I measured 910mm and with some refinements I can probably reduce the error.
Feedback is appreciated.
Adrian at pyimagesearch.com demonstrated a different technique using similar triangles. We discussed this topic beforehand; he took the similar-triangles approach and I used camera intrinsics.
There is no such function available in OpenCV to calculate the distance between an object and the camera. See this: Finding distance from camera to object of known size
You should know that the parameters depend on the camera and will change if the camera is changed.