
OpenCV - How to get real world distance from a 2D image using a chessboard as reference

[Image: chessboard placed on a table, used as the real-world size reference]

After checking several pieces of code, I took several shots, found the chessboard corners, and used them to get the camera matrix, distortion coefficients, rotation vectors, and translation vectors. Now, can someone tell me which Python OpenCV function I need to calculate real-world distances from the 2D image? projectPoints? For example, using a chessboard as a reference (see picture), if the tile size is 5 cm, the distance spanned by 4 tiles should be 20 cm. I saw functions like projectPoints, findHomography, and solvePnP, but I am not sure which one I need to solve my problem and get the transformation matrix between the camera world and the chessboard world. Setup: a single camera, kept in the same position for all shots but not exactly above the chessboard, with the chessboard placed on a planar object (a table).

    # Imports needed by this snippet
    import glob
    from os import path

    import cv2
    import numpy as np

    # nx, ny (inner-corner counts of the chessboard), calib_images_dir and verbose
    # are assumed to be defined earlier in the script.

    # Prepare object points, like (0,0,0), (1,0,0), (2,0,0) ..., (nx-1, ny-1, 0)
    objp = np.zeros((nx * ny, 3), np.float32)
    objp[:, :2] = np.mgrid[0:nx, 0:ny].T.reshape(-1, 2)

    # Arrays to store object points and image points from all the images.
    objpoints = []  # 3d points in real world space
    imgpoints = []  # 2d points in image plane.

    # Make a list of calibration images
    images = glob.glob(path.join(calib_images_dir, 'calibration*.jpg'))
    print(images)
    # Step through the list and search for chessboard corners
    for filename in images:

        img = cv2.imread(filename)

        imgScale = 0.5
        newX,newY = img.shape[1]*imgScale, img.shape[0]*imgScale
        res = cv2.resize(img,(int(newX),int(newY)))

        gray = cv2.cvtColor(res, cv2.COLOR_BGR2GRAY)

        # Find the chessboard corners
        pattern_found, corners = cv2.findChessboardCorners(gray, (nx,ny), None)

        # If found, add object points, image points (after refining them)
        if pattern_found is True:
            objpoints.append(objp)

            # Increase accuracy using subpixel corner refinement
            corners = cv2.cornerSubPix(gray, corners, (5, 5), (-1, -1),
                                       (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.1))
            imgpoints.append(corners)

            if verbose:
                # Draw and display the corners
                draw = cv2.drawChessboardCorners(res, (nx, ny), corners, pattern_found)
                cv2.imshow('img',draw)
                cv2.waitKey(500)

    if verbose:
        cv2.destroyAllWindows()

    # Now that we have our object points and image points, we are ready for calibration
    # Get the camera matrix, distortion coefficients, rotation and translation vectors
    ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(objpoints, imgpoints, gray.shape[::-1], None, None)
    print(mtx)
    print(dist)
    print('rvecs:', type(rvecs),' ',len(rvecs),' ',rvecs)
    print('tvecs:', type(tvecs),' ',len(tvecs),' ',tvecs)

    mean_error = 0
    for i in range(len(objpoints)):
        imgpoints2, _ = cv2.projectPoints(objpoints[i], rvecs[i], tvecs[i], mtx, dist)
        error = cv2.norm(imgpoints[i],imgpoints2, cv2.NORM_L2)/len(imgpoints2)
        mean_error += error

    print("total error: ", mean_error/len(objpoints))


    imagePoints, jacobian = cv2.projectPoints(objpoints[0], rvecs[0], tvecs[0], mtx, dist)
    print('Image points: ',imagePoints)
Asked Mar 18 '19 by Pablo Gonzalez


1 Answer

You are indeed right, and I think you should use solvePnP for this problem. (Read more on perspective-n-point problems here: https://en.wikipedia.org/wiki/Perspective-n-Point.)

The Python OpenCV solvePnP function takes the following parameters and returns an output rotation vector and an output translation vector, which transform points from the model coordinate system to the camera coordinate system:

cv2.solvePnP(objectPoints, imagePoints, cameraMatrix, distCoeffs[, rvec[, tvec[, useExtrinsicGuess[, flags]]]]) → retval, rvec, tvec
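For intuition, the returned rvec and tvec map a point from the chessboard (model) frame into the camera frame; cv2.Rodrigues converts the rotation vector into a 3x3 rotation matrix. A minimal sketch, with dummy rvec/tvec values standing in for real solvePnP output so it runs standalone:

    import cv2
    import numpy as np

    # Dummy values standing in for the rvec/tvec returned by cv2.solvePnP
    rvec = np.array([[0.1], [0.2], [0.3]], dtype=np.float64)
    tvec = np.array([[1.0], [2.0], [30.0]], dtype=np.float64)

    R, _ = cv2.Rodrigues(rvec)             # 3x3 rotation matrix from the rotation vector
    X_model = np.array([0.0, 0.0, 0.0])    # a point in the chessboard (model) frame
    X_cam = R @ X_model + tvec.ravel()     # the same point expressed in the camera frame
    print(X_cam)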

In your case, the imagePoints will be the refined chessboard corners, and objectPoints should be the object points for that single view (objp, not the whole objpoints list), so the call would look something like:

ret, rvec, tvec = cv2.solvePnP(objp, corners, mtx, dist)

With the returned translation vector you can calculate the distance from the camera to the chessboard. The output translation from solvePnP is in the same units as specified in objectPoints.
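For example, since the tiles are 5 cm, you can express the object points in centimeters by scaling the unit grid; the tvec returned by solvePnP will then be in centimeters as well. A minimal sketch, where square_size is an assumed name and nx, ny, corners, mtx and dist come from the calibration code above:

    square_size = 5.0  # cm, side length of one chessboard tile

    # Same grid as objp above, but expressed in centimeters
    objp_cm = np.zeros((nx * ny, 3), np.float32)
    objp_cm[:, :2] = np.mgrid[0:nx, 0:ny].T.reshape(-1, 2) * square_size

    # corners: refined chessboard corners detected in the image you want to measure
    ret, rvec, tvec = cv2.solvePnP(objp_cm, corners, mtx, dist)
    print(tvec)  # position of the chessboard origin in the camera frame, in cm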

Finally, you can compute the real distance from the tvec as the Euclidean norm:

d = math.sqrt(tx*tx + ty*ty + tz*tz)
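Concretely, with the tvec returned above (np.linalg.norm gives the same result as the explicit formula; the units follow whatever units you used for the object points):

    import math
    import numpy as np

    tx, ty, tz = tvec.ravel()                   # tvec has shape (3, 1); flatten to scalars
    d = math.sqrt(tx * tx + ty * ty + tz * tz)
    assert np.isclose(d, np.linalg.norm(tvec))  # equivalent one-liner
    print('Camera-to-chessboard distance:', d)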
Answered Nov 03 '22 by Douglas Brion