Do the sensor fusion algorithms of Core Motion take advantage of the Kalman filter?
What are sensor fusion algorithms? They combine data from multiple sensors, each with its own strengths and weaknesses, so that the synthesized result has less uncertainty than any single sensor could provide, yielding the most accurate available estimate of an object's position or orientation.
Sensor fusion is the process of combining sensor data or data derived from disparate sources such that the resulting information has less uncertainty than would be possible when these sources were used individually.
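To make "less uncertainty" concrete, here is a minimal sketch (in Python, with made-up numbers; nothing here comes from Core Motion) of inverse-variance weighting, the simplest form of fusing two independent estimates of the same quantity:

```python
def fuse(x1, var1, x2, var2):
    """Inverse-variance weighted fusion of two independent estimates.

    Returns the fused estimate and its variance, which is always
    smaller than either input variance.
    """
    w1 = 1.0 / var1
    w2 = 1.0 / var2
    fused = (w1 * x1 + w2 * x2) / (w1 + w2)
    fused_var = 1.0 / (w1 + w2)
    return fused, fused_var

# Two sensors measure the same angle: one reads 10.0 with variance 4.0,
# the other reads 12.0 with variance 1.0.
estimate, variance = fuse(10.0, 4.0, 12.0, 1.0)
# estimate is pulled toward the more reliable sensor (11.6),
# and the fused variance (0.8) is below both input variances.
```

The fused result trusts the lower-variance sensor more, which is exactly the intuition behind the more sophisticated filters discussed below.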
Update on June 22, 2016
According to the documentation provided by Apple,
The processed device-motion data provided by Core Motion’s sensor fusion algorithms gives the device’s attitude, rotation rate, calibrated magnetic fields, the direction of gravity, and the acceleration the user is imparting to the device.
That is, Core Motion now clearly provides some sensor fusion algorithm. What I cannot tell from this piece of documentation is whether that algorithm is a Kalman filter or something equally good.
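For reference, this is the kind of computation a Kalman filter performs. The sketch below is a one-dimensional, constant-state illustration in Python; it is not Apple's implementation (which, as noted, is undocumented), and all numbers are invented:

```python
def kalman_step(x, p, z, q, r):
    """One predict/update cycle of a 1-D Kalman filter.

    x, p: prior state estimate and its variance
    z:    new noisy measurement
    q, r: process and measurement noise variances
    """
    # Predict: the state is modeled as constant, so only uncertainty grows.
    p = p + q
    # Update: blend prediction and measurement using the Kalman gain.
    k = p / (p + r)
    x = x + k * (z - x)
    p = (1.0 - k) * p
    return x, p

# Filter a short run of noisy measurements of a true value near 1.0.
x, p = 0.0, 1.0
for z in [1.2, 0.9, 1.1, 1.0]:
    x, p = kalman_step(x, p, z, q=0.01, r=0.5)
# The estimate converges toward the true value while its variance shrinks.
```

A production attitude filter works on quaternions and multi-axis sensor data rather than a scalar, but the predict/update structure is the same.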
(Original answer from 2011 below.)
No, according to this post.
Update on Aug 20, 2011
It is not clear from Apple's documentation (note: the linked page had disappeared by June 22, 2016) what is actually provided. It does not seem to be a Kalman filter implementation.
According to Kay (who sent me the link above), his own Kalman filter implementation gives better results.