
OpenCV detect tennis court lines behind net

I'm trying to implement a tennis court detector using video recorded with my phone. I filmed from the far corner of the tennis court.

The original image is like this.

Original Image

Using OpenCV's Canny edge detection and Hough line transform, I'm able to detect the lines on my own half of the court, but not the ones behind the net. How can I improve this process and recover the undetected court lines?

The processed image is shown below.

Processed Image
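
For reference, my detection pipeline is roughly the following (a minimal sketch; the file name, Canny thresholds and Hough threshold are placeholders rather than my exact values):

```python
import cv2
import numpy as np

# Placeholder frame path and parameter values, not my exact ones.
img = cv2.imread("court_frame.png")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Canny edge detection followed by the standard Hough transform,
# which returns each line as (rho, theta) in polar form.
edges = cv2.Canny(gray, 50, 150, apertureSize=3)
lines = cv2.HoughLines(edges, rho=1, theta=np.pi / 180, threshold=150)

# Draw the detected lines for inspection.
if lines is not None:
    for rho, theta in lines[:, 0]:
        a, b = np.cos(theta), np.sin(theta)
        x0, y0 = a * rho, b * rho
        p1 = (int(x0 + 2000 * (-b)), int(y0 + 2000 * a))
        p2 = (int(x0 - 2000 * (-b)), int(y0 - 2000 * a))
        cv2.line(img, p1, p2, (0, 0, 255), 2)
cv2.imwrite("court_lines_detected.png", img)
```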


Updated on 2016-08-25

Thanks guys. I understand that it makes sense to derive the court lines by fitting the detected lines to a court model. I don't want to do a combinatorial search for the best lines to fit the model, so I have been trying to separate the horizontal and vertical lines first in order to reduce the computational complexity. I tried RANSAC to find the vanishing points (VP) that the two groups of lines converge to, but it failed, probably because of detection errors(?).

The scatter plot of the line parameters in polar coordinates is shown below. The goal is basically to classify the points into two groups: the points along the top, which form a roughly horizontal line, and the points at the lower left, which form a line with a steep slope. Is there any way to do that? Thanks.

Polar Coord
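
For what it's worth, the kind of split I have been experimenting with looks roughly like this (only a sketch that clusters the lines by angle with cv2.kmeans; it ignores the rho structure visible in the plot, so it may not be the right approach):

```python
import cv2
import numpy as np

def split_lines_by_angle(lines):
    """Split Hough lines (rho, theta) into two angle clusters with k-means.

    `lines` is the array returned by cv2.HoughLines, shape (N, 1, 2).
    Angles are embedded on the unit circle using 2*theta so that lines
    near 0 and near pi end up in the same cluster.
    """
    thetas = lines[:, 0, 1]
    feats = np.stack([np.cos(2 * thetas), np.sin(2 * thetas)], axis=1).astype(np.float32)

    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 50, 1e-4)
    _, labels, _ = cv2.kmeans(feats, 2, None, criteria, 10, cv2.KMEANS_RANDOM_CENTERS)

    labels = labels.ravel()
    return lines[labels == 0], lines[labels == 1]
```

The idea is that once the lines are split into two groups, each group can be handed to RANSAC separately to estimate its vanishing point.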

asked Aug 21 '16 by Yeqing Zhang


2 Answers

You don't need to detect the lines behind the net. You know the ground is a flat plane and that both halves of the court have the same dimensions, so you only need to detect the nearby lines and you can calculate where the missing ones are.

In fact you really only need to detect a single corner if you know the characteristics of the camera+lens.
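
As a rough sketch of the idea (the court dimensions are the standard ones, but the reference points and pixel coordinates below are purely illustrative, not taken from your image):

```python
import cv2
import numpy as np

# Court model coordinates in metres, origin at the near-left doubles corner,
# x across the court (0..10.97), y along the court (0..23.77).
# Reference points on the NEAR half only: the two near baseline corners and the
# two intersections of the near service line (5.485 m from the baseline) with
# the singles sidelines (1.37 m in from each doubles sideline).
model_pts = np.float32([
    [0.00,  0.000],   # near baseline, left doubles corner
    [10.97, 0.000],   # near baseline, right doubles corner
    [1.37,  5.485],   # near service line x left singles sideline
    [9.60,  5.485],   # near service line x right singles sideline
])

# Hypothetical pixel coordinates of the same four points, as found from the
# detected near-court lines (these numbers are made up).
image_pts = np.float32([
    [310, 1040],
    [1610, 1060],
    [560, 700],
    [1340, 710],
])

# Homography mapping the court plane (metres) into the image (pixels).
H, _ = cv2.findHomography(model_pts, image_pts)

# Any court-model line can now be projected into the image, including the
# far baseline that was never detected behind the net.
far_baseline = np.float32([[[0.0, 23.77]], [[10.97, 23.77]]])
far_baseline_px = cv2.perspectiveTransform(far_baseline, H)
print(far_baseline_px.reshape(-1, 2))
```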

answered by Martin Beckett


In addition to Martin's comments, you might try applying some kind of blur to the image before running your edge/line detection. With some tuning, you should be able to remove the signal from the net while preserving the court lines.
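
Something along these lines, for example (the kernel size and thresholds are guesses that would need tuning for your footage):

```python
import cv2

img = cv2.imread("court_frame.png")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# A fairly aggressive blur: the net's thin mesh blurs away while the thick
# painted court lines survive. Kernel size and Canny thresholds need tuning.
blurred = cv2.GaussianBlur(gray, (7, 7), 0)
edges = cv2.Canny(blurred, 40, 120)
cv2.imwrite("edges_after_blur.png", edges)
```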

Another approach would be to reduce the thick lines to a single pixel by scanning the image left to right (for example) to detect transitions from red/green to white and back to red/green again. When this occurs, you can estimate that the midpoint of those two transitions is the midpoint of a court line. This would give you data you could feed directly into your Hough transform. This of course requires you to classify individual pixels as either court or line, which it seems like you aren't currently doing. This process can also be performed top-to-bottom to produce a second set of midpoint estimates.
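
A rough sketch of that scan (the colour test used to classify "white" pixels is a placeholder you would need to tune to your court and lighting):

```python
import numpy as np

def line_midpoints_left_to_right(img_bgr, max_line_width=20):
    """Scan each row for court -> white -> court transitions and return line midpoints.

    The pixel test below is a crude placeholder: a pixel counts as "white" if all
    three channels are bright and roughly equal. A real implementation would need
    a classifier tuned to the actual court and line colours.
    """
    img = img_bgr.astype(np.int16)
    b, g, r = img[:, :, 0], img[:, :, 1], img[:, :, 2]
    is_white = (b > 170) & (g > 170) & (r > 170) & \
               (np.abs(r - g) < 40) & (np.abs(g - b) < 40)

    midpoints = []  # (x, y) estimates of court-line centres
    for y in range(is_white.shape[0]):
        row = is_white[y]
        x = 0
        while x < row.size:
            if row[x]:
                start = x
                while x < row.size and row[x]:   # walk to the end of the white run
                    x += 1
                if x - start <= max_line_width:  # a run too wide is probably not a line
                    midpoints.append(((start + x) // 2, y))
            else:
                x += 1
    return midpoints
```

The resulting midpoints can be fed into the Hough transform in place of the raw edge map, and the same routine run top to bottom gives the second set of estimates.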

answered by Drew Noakes