 

Machine Learning: sign visibility

I work at an airport where we need to determine the visibility conditions of pilots.

To do this, we have signs placed every 200 meters along the runway that allow us to determine how far the visibility is. We have multiple runways, and the visibility needs to be checked every hour.

Right now the visibility check is done manually with a human being who looks at the photos from the cameras placed at the end of each runway. So it can be tedious.

I'm a programmer who has very little experience with machine learning, but this sounds like an easy problem to automate. How should I approach this problem? Which algorithms should I study? Would OpenCV help me?

Thanks!

LeeMobile asked Nov 23 '13 09:11


2 Answers

I think this can be automated using computer vision techniques, and OpenCV could make the implementation easier. If all the signs are similar, you can train a classifier to recognize a sign under specific lighting conditions, then use the trained classifier to check the visibility of the signs every hour with a simple script. OpenCV already ships with Haar-like feature extraction: you can use it to train a classifier, which outputs an .xml file, and then load that .xml file to detect the signs regularly. I have done a similar project, RTVTR (Real Time Vehicle Tracking and Recognition), using OpenCV, and it worked great: http://www.youtube.com/watch?v=xJwBT76VEZ4
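A minimal sketch of the hourly check described above. The cascade file name `sign_cascade.xml` and both function names are hypothetical; the OpenCV import is kept inside the detection helper so the distance calculation below it stays dependency-free. Mapping detections back to individual signs is application-specific and only stubbed here.

```python
def detect_signs(image_path, cascade_path="sign_cascade.xml"):
    """Return True/False detection flags, nearest sign first (needs OpenCV).

    Assumes a Haar cascade has already been trained on sign images and
    saved as cascade_path (hypothetical file).
    """
    import cv2  # imported here so the helper below works without OpenCV
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    cascade = cv2.CascadeClassifier(cascade_path)
    hits = cascade.detectMultiScale(gray)
    # Assigning each detection to a known sign position is application-
    # specific; here we only report whether anything was detected at all.
    return [len(hits) > 0]

def visibility_from_flags(flags, spacing_m=200):
    """Distance covered by the unbroken run of visible signs from the camera,
    given signs spaced spacing_m apart."""
    visible = 0
    for seen in flags:
        if not seen:
            break
        visible += 1
    return visible * spacing_m
```

For example, `visibility_from_flags([True, True, False])` reports 400 m: the first two signs are visible, and the run stops at the third.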

Aadeshnpn answered Oct 27 '22 12:10


Answering to your questions:

How should I approach this problem?

It depends on the result you want or need to obtain. Is this a "hobby" project (even if job-related), or do you need to build a machine vision system that solves the problem and complies with some regulation or standard?

Which algorithms should I study?

I am very interested in your question, but I am not an expert in meteorology, so searching the relevant literature is, for me, a time-consuming task; I reserve the right to update this part of the answer in the future. I expect the solution will involve several algorithms: some very general, such as image segmentation, and some very specific, such as how to measure the visibility itself.

Update: one of the keywords for searching the literature is Meteorological Visibility, for example:

HAUTIERE, Nicolas, et al. Automatic fog detection and estimation of visibility distance through use of an onboard camera. Machine Vision and Applications, 2006, 17.1: 8-20.

LENOR, Stephan, et al. An Improved Model for Estimating the Meteorological Visibility from a Road Surface Luminance Curve. In: Pattern Recognition. Springer Berlin Heidelberg, 2013. p. 184-193.

Would OpenCV help me?

Yes, I think OpenCV can help giving you a starting point.

An idea for a naïve algorithm:

  1. Segment the image to obtain the pixel regions belonging to the signs and to the background.
  2. Compute the visibility measure with a function that takes as input the regions of all the signs and the background region.

The segmentation can be simplified a lot if the signs are always in the same fixed and known positions inside the image.
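With fixed sign positions, "segmentation" reduces to slicing known rectangles out of the grayscale frame. A sketch, where the ROI coordinates and the background strip are placeholder assumptions to be replaced with values measured from your own camera setup:

```python
import numpy as np

# Hypothetical (row_start, row_end, col_start, col_end) boxes, one per sign,
# ordered from the nearest sign to the farthest.
SIGN_ROIS = [(40, 60, 100, 120), (42, 58, 160, 175)]

def extract_regions(gray, rois=SIGN_ROIS):
    """Return one pixel array per sign plus a background sample."""
    signs = [gray[r0:r1, c0:c1] for (r0, r1, c0, c1) in rois]
    background = gray[:20, :]  # e.g. a strip of sky, assumed sign-free
    return signs, background
```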

The measure of visibility is obviously the core of the algorithm and it can be performed in a lot of ways...

You can follow a simple approach, where you compute the visibility with a mathematical formula based on the average gray levels of the sign regions and the background region.
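One possible such formula (an assumption for illustration, not a meteorological standard): a sign counts as visible when its mean gray level differs from the background mean by more than a threshold, and the visibility is the distance of the farthest sign in an unbroken visible run, with signs every 200 meters.

```python
import numpy as np

def visibility_by_contrast(sign_regions, background, threshold=15, spacing_m=200):
    """sign_regions: pixel arrays ordered nearest-first; background: pixel array.

    The threshold value is a tunable assumption, not a calibrated constant.
    """
    bg_mean = background.mean()
    distance = 0
    for i, region in enumerate(sign_regions, start=1):
        if abs(region.mean() - bg_mean) > threshold:
            distance = i * spacing_m
        else:
            break  # assume visibility cannot skip over a nearer sign
    return distance
```

A dark sign (mean 30) against a bright background (mean 100) clears the threshold easily; a washed-out sign (mean 90) does not, so the run stops there.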

You can follow a more sophisticated, machine-learning oriented approach, where you implement an algorithm that mimics your current manual procedure. In this case the problem can be framed as a supervised learning task: you have a set of training examples, each a pair composed of a) a photo of the runway (the input) and b) the visibility estimated by a human for that photo (the desired output). The system is trained on the training set, and when you give it a new photo as input, it returns a visibility measure. I assume you keep a log of past visibility measurements (METAR?); if you also saved the related images, you already have a relevant amount of data with which to build a training set and a test set.
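The supervised framing above can be sketched with the simplest possible model, ordinary least squares mapping a feature vector per photo to the human-logged visibility. The feature extraction step (e.g. per-sign contrasts) is application-specific and not shown; the function names are my own.

```python
import numpy as np

def fit_visibility_model(features, visibilities):
    """Fit a linear model by least squares.

    features: (n_samples, n_features) array, one row per photo.
    visibilities: (n_samples,) array of human-logged visibility distances.
    """
    X = np.hstack([features, np.ones((features.shape[0], 1))])  # add bias column
    coeffs, *_ = np.linalg.lstsq(X, visibilities, rcond=None)
    return coeffs

def predict_visibility(coeffs, feature_vec):
    """Predict visibility for one new photo's feature vector."""
    return float(np.dot(np.append(feature_vec, 1.0), coeffs))
```

A real system would likely need a nonlinear model, but even this baseline gives a way to compare the automated output against the human log on a held-out test set.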

Update in the age of Convolutional Neural Networks:

YOU, Yang, et al. Relative CNN-RNN: Learning Relative Atmospheric Visibility from Images. IEEE Transactions on Image Processing, 2018.

Alessandro Jacopson answered Oct 27 '22 12:10