
OpenCV - Tilted camera and triangulation landmark for stereo vision

I am using a stereo system, so I am trying to get the world coordinates of some points by triangulation.

My cameras are tilted: the Z axis (the depth direction) is not normal to the surface I observe. That is why, when I look at a flat surface, I do not get a constant depth but a "linear" variation. Is that correct? I want the depth along the baseline direction instead... How can I re-project?


A piece of my code with my projection matrices and the triangulation call:

import numpy as np
import cv2

# C1 and C2 are the camera matrices (left and right)
# R_0 and T_0 are the rotation and translation between the cameras
# Coord1 and Coord2 are the corresponding coordinates in the left and right images
P1 = np.dot(C1, np.hstack((np.identity(3), np.zeros((3, 1)))))
P2 = np.dot(C2, np.hstack((R_0, T_0)))

for i in range(Coord1.shape[0]):
    z = cv2.triangulatePoints(P1, P2, Coord1[i,], Coord2[i,])
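
For reference, cv2.triangulatePoints returns 4xN homogeneous coordinates, so each column must be divided by its fourth component to get Euclidean 3D points. A minimal sketch of the vectorized call, assuming Coord1 and Coord2 are Nx2 float arrays (an assumption, not stated above):

points4D = cv2.triangulatePoints(P1, P2, Coord1.T, Coord2.T)  # 4xN homogeneous
points3D = (points4D[:3] / points4D[3]).T                     # Nx3 Euclidean points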

-------- EDIT LATER -----------

Thanks scribbleink, I tried to apply your proposal, but I think I have a mistake because it doesn't work well, as you can see below: the point cloud seems warped and curved towards the edges of the image.

[image: reconstructed point cloud, warped towards the image edges]

from numpy import linalg

# F is the fundamental matrix between the two views
U, S, Vt = linalg.svd(F)
V = Vt.T

# Right epipole (last column of U, scaled so its third component is 1)
e2 = U[:, 2] / U[2, 2]

# The expected X direction, with C1 the camera matrix and C1[0,0] the focal length
vecteurX = np.array([e2[0], e2[1], C1[0, 0]])
vecteurX_unit = vecteurX / linalg.norm(vecteurX)

# The expected Y axis
height = 2048
vecteurY = np.array([0, height - 1, 0])
vecteurY_unit = vecteurY / linalg.norm(vecteurY)

# The expected Z direction
vecteurZ = np.cross(vecteurX, vecteurY)
vecteurZ_unit = vecteurZ / linalg.norm(vecteurZ)

# The current optical Z direction
Zoptical = np.array([0, 0, 1])

# Cosine and sine of the angle between the expected Z and the optical Z
cos_theta = np.dot(vecteurZ_unit, Zoptical) / (linalg.norm(vecteurZ_unit) * linalg.norm(Zoptical))
sin_theta = linalg.norm(np.cross(vecteurZ_unit, Zoptical))

# Rodrigues (axis-angle) vector: the rotation axis scaled by the angle,
# passed to cv2.Rodrigues to get the rotation matrix
v1 = Zoptical
v2 = vecteurZ_unit
axis = np.cross(v1, v2)
axis_unit = axis / linalg.norm(axis)
theta = np.arctan2(sin_theta, cos_theta)
R = cv2.Rodrigues(axis_unit * theta)[0]
asked Mar 16 '16 by user3601754


People also ask

What is triangulation in stereo vision?

Triangulation is the principle underlying stereo vision: binocular stereo vision determines the position of a point in space by finding the intersection of the two lines passing through each camera's center of projection and the projection of the point in each image.

What is stereo camera calibration?

Stereo calibration gives you the extrinsics, i.e. the transformation matrices between the cameras. Without stereo calibration you lack the extrinsics, and then you cannot do any depth estimation at all, because depth estimation requires them.
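
For context, a minimal sketch of how those extrinsics are typically obtained with cv2.stereoCalibrate; here objpoints, imgpoints_l, imgpoints_r, dist1, dist2 and image_size are assumed to come from a prior chessboard detection and per-camera calibration, and are not defined in the question:

# Keep the per-camera intrinsics fixed and estimate only the extrinsics
flags = cv2.CALIB_FIX_INTRINSIC
ret, C1, dist1, C2, dist2, R_0, T_0, E, F = cv2.stereoCalibrate(
    objpoints, imgpoints_l, imgpoints_r,
    C1, dist1, C2, dist2, image_size, flags=flags)
# R_0, T_0: rotation and translation from the left to the right camera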

How does a stereo camera work?

A stereo camera closely copies how our eyes work to give us accurate, real-time depth perception. It achieves this by using two sensors set a fixed distance apart to triangulate matching pixels from the two 2D images. Each pixel in a digital camera image collects light that reaches the camera along a 3D ray.
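
For a rectified pair, this triangulation reduces to depth being inversely proportional to disparity, Z = f*B/d. A tiny illustration with made-up numbers (f, B and d are assumptions, not values from the question):

f = 1400.0  # focal length in pixels (assumed)
B = 0.12    # baseline between the two sensors in metres (assumed)
d = 35.0    # disparity of a matched pixel in pixels (assumed)
Z = f * B / d  # depth along the optical axis, in metres (here ~4.8 m)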


1 Answer

Your expected Z direction is arbitrary with respect to the reconstruction method. In general, you have a rotation matrix R that rotates the left camera from your desired orientation. You can easily build that matrix. Then all you need to do is multiply your reconstructed points by the transpose of R.
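
A minimal sketch of that idea, assuming new_x, new_y and new_z are hypothetical orthonormal unit axes of the desired frame expressed in left-camera coordinates, and points3D is the Nx3 reconstruction:

# Columns of R are the desired axes, so R rotates the desired frame
# into the camera frame; R.T goes the other way.
R = np.column_stack((new_x, new_y, new_z))
points_desired = (R.T @ points3D.T).T  # Nx3 points in the desired frame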

answered Oct 05 '22 by fireant