I'm trying to auto-detect the axis of rotation on a 3d pointcloud.
In other words: if I took a small 3d pointcloud, chose a single axis of rotation, and made several copies of the points at different rotation angles, I would get a larger pointcloud.
The input to my algorithm is the larger pointcloud, and the desired output is the single axis of symmetry. And eventually I'm going to compute the correspondences between points that are rotations of each other.
The size of the larger pointcloud is on the order of 100K points, and the number of rotational copies made is unknown.
The rotation angles in my case have constant deltas, but don't necessarily span 360 degrees. For example, I might have 0, 20, 40, 60. Or I might have 0, 90, 180, 270. But I won't have 0, 13, 78, 212 (or if I do, I don't care to detect it).
This seems like a computer vision problem, but I'm having trouble figuring out how to precisely find the axis. The input will generally be very clean, close to float accuracy.
I don't have the original smaller pointcloud that was rotated/copied to make the larger pointcloud. I do know that the data is synthetic with very little noise (it's generally the output of another program).
We can't readily compute the possible numbers of points in the smaller cloud, because points lying right on the axis aren't duplicated, unfortunately. If we knew which points were on the axis we could come up with possible factors, but then we'd have already solved the problem.
--
Thanks everyone for your suggestions. It looks like my final algorithm is going to try to come up with cliques of matching points using a k-nn metric. Each clique will give an axis. I can then use RANSAC to fit an axis to the results of all the cliques.
Well, the following approach might be useful, but it depends on the specifics of your data. It is based on two assumptions: that the gaps between neighbouring rotation positions are big enough (20 degrees is probably fine), and that the small point cloud approximates a surface (the latter could be relaxed). I suggest using local feature matching, a popular technique in computer vision.
First, for each point of the large cloud you should compute a local descriptor (like SIFT or SURF for images). The most popular one for point clouds is the spin image:
Johnson, A., & Hebert, M. (1999). Using spin images for efficient object recognition in cluttered 3D scenes. IEEE Transactions on Pattern Analysis and Machine Intelligence, 21(5), 433–449. Retrieved from http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.23.8816&rep=rep1&type=pdf
An advanced modification is used here:
Endres, F., Plagemann, C., Stachniss, C., & Burgard, W. (2009). Unsupervised Discovery of Object Classes from Range Data using Latent Dirichlet Allocation. In Robotics: Science and Systems. Seattle, USA.
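To make the descriptor step concrete, here is a minimal numpy sketch of a spin image. The bin counts, support radius, and normalisation below are arbitrary choices of mine, not the paper's exact parameters, and it assumes you already have a unit normal for the basis point:

```python
import numpy as np

def spin_image(points, p, n, num_alpha=8, num_beta=8, support=1.0):
    """Spin-image descriptor for point p with unit normal n.

    Each neighbour x is mapped to cylindrical coordinates relative to
    the oriented point (p, n): beta is the signed height along n and
    alpha is the radial distance from the line through p along n.
    The (alpha, beta) pairs are binned into a 2D histogram, which is
    invariant to rotations about n.
    """
    d = points - p                      # offsets from the basis point
    beta = d @ n                        # height along the normal
    alpha = np.sqrt(np.maximum(np.sum(d * d, axis=1) - beta ** 2, 0.0))
    # Keep only neighbours inside the cylindrical support region.
    mask = (alpha < support) & (np.abs(beta) < support)
    hist, _, _ = np.histogram2d(
        alpha[mask], beta[mask],
        bins=(num_alpha, num_beta),
        range=[[0.0, support], [-support, support]],
    )
    total = hist.sum()
    return (hist / total).ravel() if total > 0 else hist.ravel()
```

Because alpha and beta only depend on distances to the oriented point, the descriptor of a cloud and of the same cloud rotated about n are identical, which is exactly what makes the later matching step work.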
If this turns out to be computationally hard, ask me how to reduce the dimensionality without losing much discriminative power; I've done that once.
Then you should match the descriptors, i.e. find the nearest neighbours for each of them in their high-dimensional space. If the small cloud has been rotated 3 times, each descriptor should have 3 good nearest neighbours. However, because of the cloud's self-intersections, the matches will probably contain noise. You still have to find the axis that fits a large number of the matches well (though not all of them). Here you can use robust fitting like RANSAC (you'll need to do some math to define the likelihood of an axis position w.r.t. the found matches). Note that this differs from the methods the others suggested: you fit just one line instead of a family of planes, based on the descriptors rather than the original points (RANSAC will probably fail to fit a plane given 4–5 correct points and 100K outliers).
Also note:
If the small cloud does not approximate a surface, you should come up with a different rotation-invariant descriptor instead of spin images.
In order to compute normals and do the retrieval, you might check out this library: http://graphics.cs.msu.ru/en/science/research/3dpoint/lidark (a major release is coming soon).
The following assumes that there are 3 or more copies. Consider the convex hull of the large pointcloud. Find two of its faces that are parallel. The rotation axis will be perpendicular to these. If you find more than one pair, just test each orientation.
Obviously, this doesn't work if the most extreme points according to the axis are right on the axis, however in the general case, this is very rare.
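A sketch of the parallel-face search, assuming SciPy's Qhull wrapper is available. The tolerance is an arbitrary choice of mine, and since Qhull may triangulate coplanar facets you can get several duplicate candidates per direction:

```python
import numpy as np
from scipy.spatial import ConvexHull

def parallel_face_normals(points, angle_tol=1e-6):
    """Candidate axis directions: unit normals of convex-hull facets
    that have a (nearly) antiparallel partner facet on the opposite
    side of the hull."""
    hull = ConvexHull(points)
    normals = hull.equations[:, :3]      # outward unit facet normals
    candidates = []
    for i in range(len(normals)):
        for j in range(i + 1, len(normals)):
            # Antiparallel outward normals <=> parallel opposite facets.
            if normals[i] @ normals[j] < -1.0 + angle_tol:
                candidates.append(normals[i].copy())
    return candidates
```

The quadratic loop over facets is fine here because a convex hull of even 100K points has comparatively few facets; each returned direction is then tested as an axis orientation, as suggested above.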
Pick any point, and find two other points that are equidistant from it. This should take O(n^2) worst-case, but heuristics can greatly reduce it. These three points uniquely determine a circle. If there is a fourth point at the same distance from either the first or the third point, that greatly increases your confidence in the circle.
The center of the circle is a point on the axis, and the normal vector to the circle is the directional vector of the axis.
Given the axis, you can determine the angle between rotations and check your guess with some other points. If it's wrong, pick another point and start over.
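The circle construction above can be sketched as follows; the function name is mine, and the circumcenter comes from the standard 3D triangle formula:

```python
import numpy as np

def circle_through_points(p1, p2, p3):
    """Center, unit normal, and radius of the circle through three
    non-collinear 3D points.  The center is a point on the candidate
    rotation axis; the normal is the candidate axis direction."""
    p1, p2, p3 = (np.asarray(p, float) for p in (p1, p2, p3))
    v1, v2 = p2 - p1, p3 - p1
    n = np.cross(v1, v2)
    nn = n @ n
    if nn < 1e-20:
        raise ValueError("points are (nearly) collinear")
    # Circumcenter of the triangle (p1, p2, p3), expressed relative
    # to p1 via the standard vector formula.
    center = p1 + np.cross((v1 @ v1) * v2 - (v2 @ v2) * v1, n) / (2.0 * nn)
    return center, n / np.sqrt(nn), np.linalg.norm(center - p1)
```

Note the normal is only defined up to sign, so when comparing candidate axes from several point triples you should treat u and -u as the same direction.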
Edit: By the way, obviously this is non-deterministic, but if your point data is as clean as you say and you use good heuristics, it should be quite accurate.