 

How to use a Kalman filter to predict GPS positions in between measurements

I have studied the OpenCV Kalman filter implementation, done some basic mouse pointer simulations, and understand the basics. But I seem to be missing a few key points for using it in my application and was hoping someone here could provide a small example.

Using a simple model with velocity and position:

KF.statePre.at<float>(0) = mouse_info.x;
KF.statePre.at<float>(1) = mouse_info.y;
KF.statePre.at<float>(2) = 0;
KF.statePre.at<float>(3) = 0;
KF.transitionMatrix = (Mat_<float>(4, 4) << 1, 0, 1, 0,
                                            0, 1, 0, 1,
                                            0, 0, 1, 0,
                                            0, 0, 0, 1);

setIdentity(KF.measurementMatrix);
setIdentity(KF.processNoiseCov, Scalar::all(1e-2));
setIdentity(KF.measurementNoiseCov, Scalar::all(1e-1));
setIdentity(KF.errorCovPost, Scalar::all(.1));

I can do a prediction:

Mat prediction = KF.predict();

and I can do a correction:

Mat estimated = KF.correct(measurement);

Doing this in a loop, I don't fully understand what the meaning of prediction, estimate and measurement is.

Of course a measurement is a "truth" value measured with some equipment, let's say GPS latitude and longitude. I think this video shows some interesting ideas: https://www.youtube.com/watch?v=EQD0PH09Jvo. It uses a GPS unit that updates at 1 Hz and then a Kalman filter to predict the value at a 10 Hz rate instead.

How would such a setup look? Is the following example the way to do it?

Mat prediction;
for (int i = 0; i < 9; ++i)              // 9 predictions at the 10 Hz rate
    prediction = KF.predict();
Mat estimated = KF.correct(measurement); // 10th step: the 1 Hz GPS fix arrives

for (int i = 0; i < 9; ++i)
    prediction = KF.predict();
estimated = KF.correct(measurement);

I am unsure whether one can predict 9 values and then provide a measurement as the 10th.

I have some logged data that I would like to test with. The logged data is GPS data at 1 Hz in a file where each row is: timestamp:lat:lng. Separately, a series of events is logged at 15 Hz in a second file, where each row is: timestamp:eventdata.

I would like to use the Kalman filter to simulate the data gathering and predict a position for all the eventdata timestamps based on the 1 Hz measurements. Of course a full example of doing this with OpenCV would be nice, but just some starting pointers on the questions above about predict and correct would also be appreciated.

Update

I gave it a try.

#include <fstream>
#include <iostream>
#include <vector>
#include <opencv2/opencv.hpp>

using namespace cv;
using namespace std;

int main()
{
    char * log = "C:/dev/sinnovations/opencv_examples/kalman_tracking/data/geosamples.txt";
    char * log1 = "C:/dev/sinnovations/opencv_examples/kalman_tracking/data/geosamples1.txt";

    ifstream myReadFile(log);
    ofstream myfile(log1);

    myfile.precision(15);

    KalmanFilter KF(4, 2, 0,CV_64F);
    Mat_<double> state(4, 1);
    Mat_<double> processNoise(4, 1, CV_64F);
    Mat_<double> measurement(2, 1); measurement.setTo(Scalar(0));

    KF.statePre.at<double>(0) = 0;
    KF.statePre.at<double>(1) = 0;
    KF.statePre.at<double>(2) = 0;
    KF.statePre.at<double>(3) = 0;

    KF.transitionMatrix = (Mat_<double>(4, 4) << 1, 0, 1, 0,
                                                 0, 1, 0, 1,
                                                 0, 0, 1, 0,
                                                 0, 0, 0, 1); // Including velocity
    KF.processNoiseCov = (cv::Mat_<double>(4, 4) << 0.2, 0, 0.2, 0, 0, 0.2, 0, 0.2, 0, 0, 0.3, 0, 0, 0, 0, 0.3); // note: overwritten by setIdentity below

    setIdentity(KF.measurementMatrix);
    setIdentity(KF.processNoiseCov, Scalar::all(1e-4));
    setIdentity(KF.measurementNoiseCov, Scalar::all(1e-1));
    setIdentity(KF.errorCovPost, Scalar::all(.1));

    std::vector < cv::Point3d >  data;
    double ms, lat, lng, speed;
    while (myReadFile >> ms >> lat >> lng >> speed) // avoids the eof() pitfall of reading the last row twice
    {
        data.push_back(cv::Point3d(ms, lat, lng));
    }

    std::cout << data.size() << std::endl;
    for (int i = 1, ii = data.size(); i < ii; ++i)
    {
        const cv::Point3d & last = data[i-1];
        const cv::Point3d & current = data[i];
        double steps = current.x - last.x;
        std::cout << "Time between Points:" << current.x - last.x << endl;
        std::cout << "Measurement:" << current << endl;

        measurement(0) = last.y;
        measurement(1) = last.z;
        
        Mat estimated = KF.correct(measurement);
        std::cout << "Estimated: " << estimated.t() << endl;
        Mat prediction = KF.predict();
        for (int j = 0; j < steps; j+=100){
            prediction = KF.predict();  
            myfile << (long)((last.x - data[0].x) + j) << " " << prediction.at<double>(0) << " " << prediction.at<double>(1) << endl;
        }       
        std::cout << "Prediction: " << prediction.t() << endl << endl;
    }

    return 0;
}

But something is missing, as the results in the picture show. Red circles are logged values and the blue line is the predicted values for the first coordinate of the lat/lng values:

I don't think the way I am handling the time values for the observations and the predicted values is correct.


Poul K. Sørensen asked Sep 06 '14 16:09

1 Answer

Let's first examine what you've said about the world in your model:

float dt = 1;
KF.transitionMatrix = (Mat_<float>(4, 4) << 1, 0, dt, 0,
                                            0, 1, 0, dt,
                                            0, 0, 1, 0,
                                            0, 0, 0, 1);

Here I've modified your transition matrix to be more explicit about the fact that you have coded a fixed timestep of dt = 1. In other words, x_next = x + dt * x_vel. A lot of explanations of the Kalman filter imply that F (the typical name for the transition matrix) is a constant. In this case you'd need to update the time-dependent terms of F any time your timestep changed.
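To make the role of dt concrete, here is a minimal plain-C++ sketch (not the OpenCV API; the function name is made up for illustration) of what one predict step of this transition matrix computes on the state [x, y, vx, vy]:

```cpp
#include <array>

// One constant-velocity predict step: position advances by velocity * dt,
// velocity is carried over unchanged. This is exactly x_next = F * x for
// the 4x4 transition matrix above.
std::array<double, 4> predictState(double x, double y,
                                   double vx, double vy, double dt) {
    return { x + dt * vx,   // x_next  = x + dt * vx
             y + dt * vy,   // y_next  = y + dt * vy
             vx,            // vx_next = vx
             vy };          // vy_next = vy
}
```

For example, predictState(10, 20, 1, -2, 1.0) predicts the position (11, 18) with the velocity unchanged; a different timestep would require a different dt here, just as it requires updating F.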

setIdentity(KF.errorCovPost, Scalar::all(.1));

You've initialized the error covariance of your state vector to I * 0.1. That means the variance of each of your initial guesses is 0.1. Variance is written as "sigma squared" because it's the square of the standard deviation. We use variance here because of the nice property that the variance of the sum of two independent random variables is the sum of their variances (in short: variances add).

So, the standard deviation of each of your initial guesses is sqrt(0.1) ~= 0.3. So in all cases you are saying that 68% of the time your initial inputs are within +/-0.3 units and 95% of the time within +/-0.6 units. If that leap is confusing to you, go study the normal distribution.

Are those reasonable statements? You know your initial mouse position exactly (I assume) so your initial error covariance for x and y are probably closer to 0. Although being 95% confident that your initial position is within +/-0.6 pixels is pretty close. You are also saying that the mouse is moving at +/-0.6 pixels/dt. Let's assume your actual dt was something like 1/60th of a second. That means you're 95% confident that the mouse is moving at a rate slower than 0.6*60 = 36 pixels/sec, which takes around a minute to cross a typical monitor. So you're very confident that the initial mouse speed is very slow.

Why do these things matter so much? Why did I spend so much time on them? Because the only "magic" of the Kalman filter is that it's going to weight your measurements against your predicted state based on the proportion of error in each. Your Kalman filter is only as good as your error estimates.

You could improve your variance estimate for the mouse velocity very easily: Just log a bunch of typical mouse movement and compute the variance of the velocities (it's trivial: it's just the average of the squared differences from the mean. In this case, since you are forcing the mean to 0 in your state initialization, it makes sense to force that assumption on your measurement, so you'd just be taking the average of the squared velocities over your sample period).
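That suggestion fits in a few lines of plain C++ (the function name is made up for illustration): with the mean forced to 0, the variance estimate is simply the mean of the squared velocity samples:

```cpp
#include <vector>

// Zero-mean variance estimate: the average of the squared samples.
// Forcing the mean to 0 matches the state initialization above, which
// starts the filter with zero velocity.
double zeroMeanVariance(const std::vector<double>& samples) {
    double sumSquares = 0.0;
    for (double v : samples) sumSquares += v * v;
    return sumSquares / samples.size();
}
```

Feeding in logged mouse velocities of, say, {1, -1, 2, -2} pixels/dt gives a variance of 2.5, i.e. a standard deviation of about 1.6 pixels/dt to put on the velocity diagonal of the initial error covariance.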

setIdentity(KF.processNoiseCov, Scalar::all(1e-2));

Here again you have initialized a diagonal matrix. Let's evaluate your claim: Having ignored the actual mouse position for a while, your belief about the system has eroded. Variances add (see above), so the Kalman filter can just add in the processNoiseCov (usually Q) to represent that erosion. So after taking into account your knowledge of mouse movement due to velocity, you're 95% sure that your predicted position is within +/-2 * sqrt(1e-2) = +/-0.2 pixels. So not knowing anything about what the user has done (yet: we're not at the measurement step yet) we are pretty damn sure about our constant velocity model. And since the same logic holds about being 95% sure velocity is unchanged within 0.2 pixels/dt (which might be smoother mouse motion than is physically possible), you are telling the Kalman filter that you are really sure that the model is right, and it should be trusted a lot.

setIdentity(KF.measurementNoiseCov, Scalar::all(1e-1));

I'm not going to break this down other than to point out that:

  1. Your noise covariance for your actual measured mouse position is higher than your noise covariance for the process. In other words, you believe the mouse moves in a straight (estimated) line more than you believe the mouse positions you're putting in.
  2. Your belief in the accuracy of your actual mouse positions is not as strong as it should be: You're not using something "fuzzy" like GPS or camera tracking. You're given the actual noise free mouse coordinate. Your true measurementNoiseCovariance (typically R) is 0.

Now what is the effect of setting R=0? It means that the proportion of error in the position at the KF.correct(measurement) step is going to be 100% attributed to your model. So K (the Kalman gain) will take 100% of the measurement and replace your x, y mouse position with the new value. That is essentially correct, though. If you replace mouse with GPS then your R != 0 and you will mix some new state with some old state. But in your actual system (unless you intentionally inject simulated noise) you know your actual mouse position exactly. If you add simulated noise you can put its exact variance into R and you will get optimal state estimation as promised.
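In the scalar case that proportioning is explicit (plain-C++ sketch, names made up for illustration): the gain is K = P / (P + R), and the correction moves the prediction toward the measurement by exactly that fraction:

```cpp
// Scalar Kalman gain: the fraction of the innovation taken from the
// measurement. R = 0 gives K = 1, i.e. the measurement fully replaces
// the predicted state.
double kalmanGain(double p, double r) {
    return p / (p + r);
}

// Corrected state estimate: move from the prediction x toward the
// measurement z by the gain k.
double correctState(double x, double z, double k) {
    return x + k * (z - x);
}
```

With P = R the gain is 0.5 and the estimate lands halfway between prediction and measurement; with R = 0 (the noise-free mouse) the gain is 1 and correctState returns z exactly.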

So taking 100% of the measurement sounds pretty boring in terms of state estimation. But one of the magic bits of the Kalman filter is that knowing the exact position will back-propagate into the other terms (in this case, estimated velocity) because the filter knows your transitionMatrix (aka F) so it knows how the bad velocity state influenced the (now known bad) position state.

Now to address your specific question:

How would such a setup look? Is the following example the way to do it?

It appears to work for that particular implementation. Normally you run a Kalman filter in a loop of predict + correct (usually called "update"). The final output state is actually the posterior estimation from after the correction. In some implementations the predict function might not update the final state at all, which would break your loop.
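To make the loop shape concrete, here is a toy scalar sketch of the 10 Hz predict / 1 Hz correct pattern (plain C++, not the OpenCV API; the struct, its simplified coupling of position and velocity, and all constants are illustrative assumptions):

```cpp
// Toy 1-D constant-velocity filter: position, velocity, and a single
// position variance (a real filter would carry a full covariance matrix).
struct ToyFilter {
    double pos = 0.0, vel = 0.0;
    double var = 1.0;             // position variance P
    double q = 1e-3, r = 1e-2;    // process / measurement noise

    void predict(double dt) {     // run at the fast (10 Hz) rate
        pos += vel * dt;
        var += q;                 // uncertainty grows between fixes
    }
    void correct(double z, double dtSinceFix) {  // run per 1 Hz GPS fix
        double k = var / (var + r);              // scalar Kalman gain
        double innovation = z - pos;
        pos += k * innovation;
        vel += k * innovation / dtSinceFix;      // crude velocity feedback
        var *= (1.0 - k);
    }
};

// Ten fast predictions per slow fix: the pattern asked about above.
double runDemo() {
    ToyFilter f;
    for (int fix = 1; fix <= 5; ++fix) {
        for (int i = 0; i < 10; ++i)
            f.predict(0.1);            // 10 Hz steps between fixes
        f.correct(fix * 1.0, 1.0);     // fix from a target moving at 1 unit/s
    }
    return f.pos;                      // ends up close to the true position, 5
}
```

The key point the toy makes is that predicting repeatedly is fine: the state coasts on the model and the variance grows, then the single correct step pulls both position and velocity back toward the measurement.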

Someone else already pointed out your double predict.

What you will find in the literature is that your approach isn't very commonly described. This is mainly because a huge amount of work was done on state estimation in the 60s and 70s, when the cost of running the filter at the fast rate was too much. Instead, the idea was to update the filter only at the slow rate (1 Hz in your example). This is done by using a state vector of errors in position (and velocity and acceleration, in high dynamic systems) and only doing the predict + update step at the slow rate, meanwhile using the predicted error corrections continuously at the fast rate. This form is called an Indirect Kalman Filter. You will also find papers arguing that this is archaic, cargo-cult programming which gives slightly inferior results to the direct approach. Even if true, it doesn't change the fact that any google search on the topic of position tracking is very likely to use the indirect form, which is harder to understand.

Ben Jackson answered Sep 30 '22 18:09