I am struggling and need help.
I want to compute optical flow velocity from the known motion of a real-world object (actually, it is the camera that is moving). This is part of what I asked in my previous question (Determining if a feature is part of a moving object from sparse optical flow (KLT)).
Anyway, I have already computed sparse optical flow using cvGoodFeaturesToTrack() and cvCalcOpticalFlowPyrLK().
I just want to check whether the flow I computed is theoretically correct, i.e. whether it corresponds to the motion of the camera.
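For reference, here is a minimal sketch of that pipeline using the modern Python bindings (cv2.goodFeaturesToTrack / cv2.calcOpticalFlowPyrLK) rather than the old C API named above; the frame file names and detector/tracker parameters are illustrative assumptions, not values from my actual setup.

```python
# Minimal sparse KLT flow sketch (assumed file names and parameters).
import cv2
import numpy as np

prev = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)  # placeholder names
curr = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)

# Detect good corners in the previous frame.
p0 = cv2.goodFeaturesToTrack(prev, maxCorners=500, qualityLevel=0.01, minDistance=7)

# Track them into the current frame with pyramidal Lucas-Kanade.
p1, status, err = cv2.calcOpticalFlowPyrLK(prev, curr, p0, None,
                                           winSize=(21, 21), maxLevel=3)

# Keep only successfully tracked points; the per-feature flow vector
# is simply the displacement between the two frames.
good_old = p0[status.flatten() == 1].reshape(-1, 2)
good_new = p1[status.flatten() == 1].reshape(-1, 2)
flow = good_new - good_old  # (dx, dy) per feature, in pixels per frame
```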
Let the camera move only along the Z axis (simply ignore yaw rate for now). Assume the camera moves with velocity Vz (in the Z direction).
I can find the optical flow by
vx = x * Vz / Z
vy = y * Vz / Z
(assuming Vx = Vy = 0, i.e. no camera motion along the x and y axes)
This is what I have studied, mainly from http://www.cse.psu.edu/~rcollins/CSE486/lecture22_6pp.pdf.
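As a concrete check of that formula, here is a tiny worked example. The numbers are made up purely for illustration; (x, y) are image coordinates measured relative to the principal point (the focus of expansion for pure Z translation), following the sign conventions of the linked notes.

```python
# Expected image velocity for pure camera translation along the optical axis:
#   vx = x * Vz / Z,   vy = y * Vz / Z
def expected_flow(x, y, Vz, Z):
    """Expected flow (pixels per unit time) at image point (x, y),
    given camera speed Vz along Z and scene depth Z at that point."""
    return x * Vz / Z, y * Vz / Z

# A point 100 px right and 50 px below the principal point, 10 m away,
# with the camera moving forward at 1 m/s (illustrative numbers):
vx, vy = expected_flow(100.0, 50.0, Vz=1.0, Z=10.0)
print(vx, vy)  # -> 10.0, 5.0 : flow points radially away from the FOE
```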
The problem is that to solve this I need Z. In my case I cannot assume the surface Z to be flat or known. The camera is moving along a road and is directed perpendicular to the ground.
Could anyone please help me answer the questions below?
Thank you very much.
[If you find this question to be too vague, please let me know so that I can give more detail.]
where Vx = u = dx/dt denotes the movement of x over time and Vy = v = dy/dt denotes the movement of y over time. Solving for these two variables completes the optical flow problem.
The apparent motion of the brightness patterns is called optical flow. The optical flow is a field of 2D vectors defined on the image domain, i.e. at each pixel (x, y) in the image there is a vector (u(x, y), v(x, y)) giving the apparent displacement at (x, y) per unit time.
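To make that per-pixel vector field concrete, here is a short dense-flow sketch using OpenCV's Farnebäck method (cv2.calcOpticalFlowFarneback); the frame file names and parameter values are placeholders, not part of the original question.

```python
# Dense optical flow: one (u, v) displacement vector per pixel.
import cv2

prev = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)  # placeholder names
curr = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)

flow = cv2.calcOpticalFlowFarneback(prev, curr, None,
                                    pyr_scale=0.5, levels=3, winsize=15,
                                    iterations=3, poly_n=5, poly_sigma=1.2,
                                    flags=0)
u, v = flow[..., 0], flow[..., 1]  # H x W arrays of horizontal/vertical flow
```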
Optical flow has been used by robotics researchers in many areas, such as object detection and tracking, image dominant-plane extraction, movement detection, robot navigation, and visual odometry. Optical flow information has also been recognized as useful for controlling micro air vehicles.
Perhaps this could help... Video lectures from University of Central Florida Computer Vision Group:
Additional Python code from Jan Erik Solem: Programming Computer Vision with Python.
Read chapter 10.4; it will most probably answer all your questions.
Also look at chapter 5.4 of that book. If you take an image with the camera, then move the camera slightly in the x-direction and take another image, you can compute a "disparity map" from the two images, which tells you which parts of the scene are near the camera and which are far away. This is somewhat like recovering the z-direction, along the lines of what you have already tried and what some of the comments mentioned about stereo imaging.
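As a rough sketch of that idea, here is a disparity map computed with OpenCV's block-matching stereo (cv2.StereoBM_create) rather than the book's own code; the file names, baseline, and parameter values are illustrative assumptions.

```python
# Disparity from two views separated by a small sideways shift.
import cv2

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)    # placeholder names
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# numDisparities must be a multiple of 16; blockSize must be odd.
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right)  # 16-bit fixed point, scaled by 16
# Larger disparity = closer to the camera. With a known baseline b and
# focal length f (pixels), depth follows as Z = f * b / disparity.
```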
Chapter 4.3 explains pose estimation using planar markers. You can use an object placed in front of the camera at a known distance to calibrate the camera. This is probably what you should look at first.
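Here is a hedged sketch of that idea using cv2.solvePnP instead of the book's own code: estimate the camera pose from a planar target of known size. The marker size, pixel coordinates, and camera intrinsics below are illustrative assumptions; the intrinsics would normally come from a prior cv2.calibrateCamera run.

```python
# Pose of the camera relative to a known planar marker (assumed numbers).
import cv2
import numpy as np

# 3D corners of a 20 cm square marker lying in the Z = 0 plane (metres).
object_points = np.array([[0, 0, 0], [0.2, 0, 0],
                          [0.2, 0.2, 0], [0, 0.2, 0]], dtype=np.float32)

# Corresponding pixel coordinates of those corners (placeholder values).
image_points = np.array([[320, 240], [420, 242],
                         [418, 340], [322, 338]], dtype=np.float32)

# Assumed pinhole intrinsics: focal length ~800 px, principal point at centre.
K = np.array([[800, 0, 320],
              [0, 800, 240],
              [0,   0,   1]], dtype=np.float64)
dist = np.zeros(5)  # assume no lens distortion for this sketch

ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, dist)
# tvec[2] is the marker's distance along the optical axis, i.e. a Z value
# you could use to sanity-check the flow formula for features on the marker.
```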