I'm writing an app that can detect lanes in a driving simulator. The environment is relatively simple: mostly straight multi-lane roads with almost no curvature. At the moment, I can successfully detect lines using the (classical) Hough Transform, but the issue is that the HT naturally also detects lines that are not lane markers.
How can I be more selective? I already discard horizontal lines, but some spurious lines still creep in. Ideally, I would like to detect the boundaries of the lane the vehicle is traveling in. The following is a typical image of the environment.
Here is what I'm doing so far:
The reason for thresholding the image is as follows. If you take a look at the environment photograph linked above, you'll see a grayish line running parallel to the road. Because it's a continuous line - unlike the lane markers - the HT ends up detecting it. I cannot exclude it based on gradient, as it has the same gradient as the lane markers. With thresholding, I can remove it and therefore detect only lines that are the actual lane markers.
Here is the result of the above operations
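For what it's worth, here is a rough sketch of the kind of pipeline I mean (threshold, edge detection, probabilistic Hough, then dropping near-horizontal segments), written with OpenCV in Python. The threshold value and Hough parameters below are placeholders rather than my actual settings and would need tuning for the simulator footage:

```python
import cv2
import numpy as np

def detect_lane_segments(frame_bgr, thresh_val=180):
    """Threshold -> Canny -> probabilistic Hough, then drop near-horizontal segments."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)

    # Binary threshold: keep only the bright lane paint and suppress the
    # grayish line that runs parallel to the road.
    _, binary = cv2.threshold(gray, thresh_val, 255, cv2.THRESH_BINARY)

    # Edge detection followed by the probabilistic Hough Transform.
    edges = cv2.Canny(binary, 50, 150)
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=50,
                            minLineLength=40, maxLineGap=20)
    if lines is None:
        return []

    segments = []
    for x1, y1, x2, y2 in lines[:, 0]:
        angle = abs(np.degrees(np.arctan2(y2 - y1, x2 - x1)))
        # Reject segments within roughly 20 degrees of horizontal.
        if 20 < angle < 160:
            segments.append((x1, y1, x2, y2))
    return segments
```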
I understand that there are many solutions to this problem, and I have read countless papers on it, but they all seem to handle environments vastly more complicated than this and/or are simply way over my head. For what it's worth, just over a month ago I had no background in computer vision, so all of this is very new to me.
UPDATE 1:
I guess to put this in better terms, I'm looking for a way to model the lanes so that lines that do not fit the model are not included. Unfortunately, I do not have a clue about where to begin with models. Any suggestions?
For what it's worth, I have managed to identify the lane that the vehicle is traveling in and can exclude the extra lines that are not part of that "active" lane, so to speak. Hopefully this photo will help.
It's not perfect, but it's something, I guess. My ultimate goal, after modeling, is to estimate the heading/position of the vehicle within the lane. But first I just want to get relatively robust lane detection. I'm hoping there is a relatively simple technique that can help achieve this (something that does not depend on the system's parameters, such as focal length or field of view).
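In case it clarifies what I mean by excluding lines outside the "active" lane, here is a rough illustration (not my exact code; the slope tolerance is made up) of picking, from the surviving Hough segments, the innermost line on each side of the image centre:

```python
import numpy as np

def pick_ego_lane(segments, image_width, image_height):
    """From a list of (x1, y1, x2, y2) segments (e.g. the output of the sketch
    above), keep the innermost segment on each side of the image centre,
    judged by where its extension crosses the bottom of the image."""
    cx = image_width / 2.0
    left = right = None
    best_left, best_right = -np.inf, np.inf

    for x1, y1, x2, y2 in segments:
        if x1 == x2:
            continue  # skip exactly vertical segments (no finite slope)
        slope = (y2 - y1) / float(x2 - x1)
        if abs(slope) < 0.3:
            continue  # ignore near-horizontal leftovers
        # x-coordinate where the extended segment meets the bottom image row.
        x_bottom = x1 + (image_height - y1) / slope
        if slope < 0 and x_bottom < cx and x_bottom > best_left:
            best_left, left = x_bottom, (x1, y1, x2, y2)    # left boundary candidate
        elif slope > 0 and x_bottom > cx and x_bottom < best_right:
            best_right, right = x_bottom, (x1, y1, x2, y2)  # right boundary candidate
    return left, right
```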
One way to go would be to use prior knowledge of the scene you are looking at. You could have a model with a hidden state comprising more or less static parameters, such as camera height, camera tilt, or lane width, and dynamic parameters, such as camera yaw, lateral displacement of the camera within the lane, road curvature, etc. You could handle such a model in the framework of a Kalman filter. An advantage of such a model is its ability to tolerate other road-surface markings such as direction arrows, zebra crossings, and the like. Good luck!
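To make the idea a bit more concrete, here is a bare-bones sketch of what such a filter could look like, with an illustrative three-parameter state (lateral offset, yaw, lane width), an identity measurement model, and placeholder noise covariances; a real implementation would add curvature, a motion model driven by vehicle speed, and gating of bad detections:

```python
import numpy as np

class LaneKalmanFilter:
    """Toy linear Kalman filter over a lane state [lateral offset, yaw, lane width].
    Each frame, z is a noisy measurement of the same three quantities derived
    from the fitted lane lines; pass z=None when detection fails."""

    def __init__(self):
        self.x = np.zeros(3)        # state estimate
        self.P = np.eye(3)          # state covariance
        self.F = np.eye(3)          # state assumed roughly constant frame to frame
        self.Q = np.eye(3) * 0.01   # process noise (placeholder)
        self.H = np.eye(3)          # measurements observe the state directly
        self.R = np.eye(3) * 0.1    # measurement noise (placeholder)

    def step(self, z):
        # Predict.
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        # Update, if a measurement is available.
        if z is not None:
            y = np.asarray(z) - self.H @ self.x
            S = self.H @ self.P @ self.H.T + self.R
            K = self.P @ self.H.T @ np.linalg.inv(S)
            self.x = self.x + K @ y
            self.P = (np.eye(3) - K @ self.H) @ self.P
        return self.x
```

Detected lines that are inconsistent with the filtered state (for example, whose position is far from the predicted lane boundaries) can then be rejected before the update step, which is exactly the "lines that do not fit the model are not included" behaviour you are after.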