I have video taken from a car. My program measures the distance between the front wheel and the white line on the road. The script works well for both the left-side and the right-side video, but sometimes it measures the wrong distance between the front wheel and the white line on the right side.
import datetime
import cv2
import pandas

# args.video comes from an argparse parser (not shown here)
thresh = 150
distance_of_wood_plank = 80   # plank width in cm (reference object)
pixel_of_wood_plank = 150     # plank width in pixels
origin_width = 0
origin_height = 0
wheel_x = 0; wheel_y = 0      # x point and y point of wheel
df = pandas.DataFrame(columns=["Frame_No", "Distance", "TimeStrap"])
cap = cv2.VideoCapture(args.video)
frame_count = 0
while cap.isOpened():  # read the input video with OpenCV's VideoCapture
    try:
        frame_count += 1
        ret, source = cap.read()  # get frame from video
        origin_height, origin_width, channels = source.shape
        timestamps = [cap.get(cv2.CAP_PROP_POS_MSEC)]
        milisecond = int(timestamps[0]) / 1000
        current_time = str(datetime.timedelta(seconds=milisecond))
        cv2.waitKey(1)
        grayImage = cv2.cvtColor(source, cv2.COLOR_RGB2GRAY)  # get gray image
        crop_y = int(origin_height / 3 * 2) - 30
        crop_img = grayImage[crop_y:crop_y + 100, 0:origin_width]  # region of interest
        blur_image = cv2.blur(crop_img, (3, 3))
        ret, th_wheel = cv2.threshold(blur_image, 10, 255, cv2.THRESH_BINARY)   # get only wheel
        ret, th_line = cv2.threshold(blur_image, 150, 255, cv2.THRESH_BINARY)   # get only white line
        contours, hierarchy = cv2.findContours(th_wheel, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)[-2:]
        # get x point and y point of wheel
        for cnt in contours:
            x, y, w, h = cv2.boundingRect(cnt)
            if x < origin_width / 4:
                continue
            elif w < 10:
                continue
            elif w > 80:
                continue
            elif x > origin_width / 4 * 3:
                continue
            wheel_x = int(x)
            wheel_y = int(y + h / 2 - 8)
        pixel_count = 0  # count of pixels between wheel and white line
        # get distance between wheel and white line
        if wheel_x > origin_width / 2:
            wheel_x -= 7
            for i in range(wheel_x, 0, -1):
                pixel_count += 1
                suit_point = th_line[wheel_y, i]
                if suit_point == 255:
                    break
            if i == 1:  # reached the image edge without finding the line
                pixel_count = 0
            pixel_count -= 4
            cv2.line(source, (wheel_x - pixel_count, wheel_y + crop_y), (wheel_x, wheel_y + crop_y), (255, 0, 0), 2)
        else:
            wheel_x += 7
            for i in range(wheel_x, origin_width):
                pixel_count += 1
                suit_point = th_line[wheel_y, i]
                if suit_point == 255:
                    break
            if i == origin_width - 1:  # reached the image edge without finding the line
                pixel_count = 0
            pixel_count += 4
            cv2.line(source, (wheel_x, wheel_y + crop_y), (wheel_x + pixel_count, wheel_y + crop_y), (255, 0, 0), 2)
        distance_Cm = int(pixel_count * distance_of_wood_plank / pixel_of_wood_plank)
        if distance_Cm > 10:
            str_distance = str(distance_Cm) + "Cm"
        else:
            str_distance = "No white line"
        cv2.putText(source, str_distance, (50, 250), cv2.FONT_HERSHEY_SIMPLEX, 1, (255, 0, 0), 2, cv2.LINE_AA)
        df = df.append({'Frame_No': frame_count, 'Distance': str_distance, 'TimeStrap': current_time}, ignore_index=True)
        df.to_csv("result.csv")
        cv2.imshow("Distance_window", source)
        cv2.waitKey(1)
    except:
        pass
Here is the link for the video - https://drive.google.com/file/d/1IjJ-FA2LTGv8Cz-ReL7fFI7HPTiEhyxF/view?usp=sharing
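The pixel-to-centimeter conversion in the script above relies on a reference object: a wood plank known to be 80 cm wide that spans 150 px in the frame. A minimal sketch of that calibration step, using the constants from the script:

```python
def pixels_to_cm(pixel_count, plank_cm=80, plank_px=150):
    """Convert a pixel gap to centimeters using the wood-plank calibration."""
    return int(pixel_count * plank_cm / plank_px)

print(pixels_to_cm(150))  # the plank itself: 80 cm
print(pixels_to_cm(75))   # half the plank width: 40 cm
```

Note that this ratio only holds at the plank's depth in the scene; any perspective change (camera tilt, different mounting) would require recalibrating.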
You are actually doing a really good job of measuring the distance between the tire and the white line. What you need to consider is how much noise there is in your samples. Unless you stop the truck, get out, and measure the distance from the tire to the line with a tape, you are never really going to know how far it is. You also need to consider that (unless you wreck the truck) the distance from the tire to the white line won't vary by more than a few pixels between frames.
The best solution would be a Kalman filter, but that is pretty complex, so I used a simpler approach: to find the line position, I averaged the last four values to reduce noise.
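The "average the last four values" idea can be sketched in isolation, assuming `positions` is a list of `(x, y)` detections collected one per frame:

```python
import numpy as np

def smoothed_position(positions, window=4):
    """Average the last `window` detected positions to suppress per-frame noise."""
    xs, ys = zip(*positions[-window:])
    return int(np.average(xs)), int(np.average(ys))

# e.g. four noisy detections of a point near (100, 50)
pts = [(98, 49), (101, 51), (100, 50), (101, 50)]
print(smoothed_position(pts))  # (100, 50)
```

Since the true distance changes by only a few pixels per frame, averaging a short window removes most single-frame detection glitches without noticeable lag.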
import math

import numpy as np
import cv2

thresh = 150
distance_of_wood_plank = 80   # plank width in cm (reference object)
pixel_of_wood_plank = 150     # plank width in pixels
origin_width = 0
origin_height = 0
wheel_x = 0; wheel_y = 0      # x point and y point of wheel
cap = cv2.VideoCapture('/home/stephen/Desktop/20180301 1100 VW Right.mp4')
frame_count = 0
vid_writer = cv2.VideoWriter('/home/stephen/Desktop/writer.avi',
                             cv2.VideoWriter_fourcc('M', 'J', 'P', 'G'), 30, (480, 360))
positions = []

def distance(a, b):
    """Euclidean distance between two (x, y) points."""
    return math.sqrt((a[0] - b[0])**2 + (a[1] - b[1])**2)

while cap.isOpened():  # read the input video with OpenCV's VideoCapture
    frame_count += 1
    ret, source = cap.read()  # get frame from video
    origin_height, origin_width, channels = source.shape
    grayImage = cv2.cvtColor(source, cv2.COLOR_RGB2GRAY)  # get gray image
    crop_y = int(origin_height / 3 * 2) - 30
    crop_img = grayImage[crop_y:crop_y + 100, 0:origin_width]  # region of interest
    blur_image = cv2.blur(crop_img, (3, 3))
    ret, th_wheel = cv2.threshold(blur_image, 10, 255, cv2.THRESH_BINARY)   # get only wheel
    ret, th_line = cv2.threshold(blur_image, 150, 255, cv2.THRESH_BINARY)   # get only white line
    contours, hierarchy = cv2.findContours(th_wheel, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)[-2:]
    # get x point and y point of wheel
    for cnt in contours:
        x, y, w, h = cv2.boundingRect(cnt)
        if x < origin_width / 4:
            continue
        elif w < 10:
            continue
        elif w > 80:
            continue
        elif x > origin_width / 4 * 3:
            continue
        wheel_x = int(x)
        wheel_y = int(y + h / 2 - 8)
    pixel_count = 0  # count of pixels between wheel and white line
    # get distance between wheel and white line
    if wheel_x > origin_width / 2:
        wheel_x -= 7
        for i in range(wheel_x, 0, -1):
            pixel_count += 1
            suit_point = th_line[wheel_y, i]
            if suit_point == 255:
                break
        if i == 1:  # reached the image edge without finding the line
            pixel_count = 0
        pixel_count -= 4
    else:
        wheel_x += 7
        for i in range(wheel_x, origin_width):
            pixel_count += 1
            suit_point = th_line[wheel_y, i]
            if suit_point == 255:
                break
        if i == origin_width - 1:  # reached the image edge without finding the line
            pixel_count = 0
        pixel_count += 4
    a, b = (wheel_x - pixel_count, wheel_y + crop_y), (wheel_x, wheel_y + crop_y)
    # only keep detections where a line was actually found
    if distance(a, b) > 10:
        positions.append((wheel_x + pixel_count, wheel_y + crop_y))
    if len(positions) > 10:
        # draw the last ten raw detections with growing radii
        radius = 2
        for position in positions[-10:]:
            radius += 2
            center = tuple(np.array(position, int))
            cv2.circle(source, center, radius, (255, 255, 0), -1)
        # average the last four positions to reduce noise
        x, y = zip(*positions[-4:])
        xa, ya = np.average(x), np.average(y)
        center = int(xa), int(ya)
        cv2.circle(source, center, 20, (0, 0, 255), 10)
    cv2.imshow("Distance_window", source)
    vid_writer.write(cv2.resize(source, (480, 360)))
    k = cv2.waitKey(1)
    if k == 27:  # Esc to quit
        break
cv2.destroyAllWindows()
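For reference, the Kalman filter mentioned above does not have to be complex for this problem: a scalar constant-position filter is enough to smooth the measured distance. This is a generic sketch, not part of the answer's code; the noise parameters `q` and `r` are assumptions you would tune for your own footage:

```python
class Kalman1D:
    """Minimal scalar Kalman filter with a constant-position model."""
    def __init__(self, q=1e-3, r=1.0):
        self.q, self.r = q, r        # process and measurement noise variances
        self.x, self.p = None, 1.0   # state estimate and its variance

    def update(self, z):
        if self.x is None:           # initialize on the first measurement
            self.x = z
            return self.x
        self.p += self.q                  # predict: variance grows by process noise
        k = self.p / (self.p + self.r)    # Kalman gain
        self.x += k * (z - self.x)        # correct toward the measurement
        self.p *= (1 - k)
        return self.x

kf = Kalman1D()
for d in [40, 42, 39, 41, 120, 40]:  # per-frame distances; 120 is an outlier frame
    est = kf.update(d)
```

The outlier frame pulls the estimate only part of the way toward 120, which is the kind of dampening you want given that the true distance changes by only a few pixels per frame.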