 

Assurance of ICP: Internal Metrics

So I have an iterative closest point (ICP) algorithm that has been written and will fit a model to a point cloud. As a quick tutorial for those not in the know: ICP is a simple algorithm that fits points to a model, ultimately providing a homogeneous transformation matrix between the model and the points.

Here is a quick picture tutorial.

Step 1: Find the closest point in the model set to each point in your data set:

Step 2: Using a bunch of fun maths (sometimes based on gradient descent or SVD), pull the clouds closer together and repeat until a pose is formed:

[Figure 2]
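For concreteness, here is a minimal sketch of step 1, the correspondence search. The Point type and the brute-force loop are illustrative only; a real implementation would use a k-d tree, and step 2 would then solve for the rigid transform aligning the matched pairs (e.g. via SVD of the cross-covariance matrix).

```cpp
#include <cstddef>
#include <limits>
#include <vector>

struct Point { double x, y, z; };

// Step 1 of ICP: for every data point, find the index of its closest
// model point (brute force for clarity; use a k-d tree in practice).
std::vector<std::size_t> closestPoints(const std::vector<Point>& data,
                                       const std::vector<Point>& model) {
    std::vector<std::size_t> match(data.size(), 0);
    for (std::size_t i = 0; i < data.size(); ++i) {
        double best = std::numeric_limits<double>::max();
        for (std::size_t j = 0; j < model.size(); ++j) {
            const double dx = data[i].x - model[j].x;
            const double dy = data[i].y - model[j].y;
            const double dz = data[i].z - model[j].z;
            const double d2 = dx * dx + dy * dy + dz * dz;
            if (d2 < best) { best = d2; match[i] = j; }
        }
    }
    return match;
}
```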

Now that bit is simple and working. What I would like help with is: how do I tell if the pose that I have is a good one?

So currently I have two ideas, but they are kind of hacky:

  1. Count how many points the ICP algorithm is fitting to. I.e., if I am fitting to almost no points, I assume that the pose will be bad:

     But what if the pose is actually good? It could be, even with few points. I don't want to reject good poses:

[Figure 5]

So what we see here is that few points can still produce a very good pose if they are in the right place.

So the other metric I investigated was the ratio of supplied points to used points. Here's an example:

[Figure 6]

Now we exclude points that are too far away because they will be outliers. This means we need a good starting position for the ICP to work, but I am OK with that. In the above example the assurance check will say NO, this is a bad pose, and it would be right, because the ratio of points included to points supplied is:

2/11 < SOME_THRESHOLD
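A minimal sketch of this check, assuming the per-point distances to the matched model points are available after fitting (residuals, maxDist, and minRatio are made-up names):

```cpp
#include <cstddef>
#include <vector>

// Assurance check on the used/supplied ratio. residuals holds each data
// point's distance to its matched model point; maxDist and minRatio are
// hypothetical tuning parameters found by experiment.
bool poseLooksGood(const std::vector<double>& residuals,
                   double maxDist, double minRatio) {
    if (residuals.empty()) return false;
    std::size_t used = 0;
    for (double r : residuals)
        if (r <= maxDist) ++used;  // points beyond maxDist are outliers
    return static_cast<double>(used) / residuals.size() >= minRatio;
}
```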

So that's good, but it will fail in the case shown above where the triangle is upside down. It will say that the upside-down triangle is good because all of the points are used by ICP.

You don't need to be an expert on ICP to answer this question; I am looking for good ideas. Using knowledge of the points, how can we classify whether a pose solution is good or not?

Using both of these solutions together in tandem is a good suggestion, but it's a pretty lame solution if you ask me; just thresholding everything feels very dumb.

What are some good ideas for how to do this?

PS. If you want to add some code, please go for it. I am working in C++.

PPS. Someone help me with tagging this question; I am not sure where it should fall.

asked Nov 28 '12 by Fantastic Mr Fox


2 Answers

One possible approach might be comparing poses by their shapes and their orientation.

Shape comparison can be done with the Hausdorff distance up to isometry; that is, two poses have the same shape if

d(I(actual_pose), calculated_pose) < d_threshold

where d_threshold should be found from experiments. As the isometric modifications I(X) of a pose X, I would consider rotations by different angles; that seems to be sufficient in this case.
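A sketch of the symmetric Hausdorff distance between two point sets; to compare up to isometry as described, apply each candidate rotation to one set, evaluate this, and take the minimum over the rotations:

```cpp
#include <algorithm>
#include <cmath>
#include <limits>
#include <vector>

struct Point { double x, y, z; };

// Directed Hausdorff distance: the worst-case distance from a point of
// `a` to its nearest neighbour in `b`.
double directedHausdorff(const std::vector<Point>& a,
                         const std::vector<Point>& b) {
    double worst = 0.0;
    for (const Point& p : a) {
        double best = std::numeric_limits<double>::max();
        for (const Point& q : b) {
            const double dx = p.x - q.x, dy = p.y - q.y, dz = p.z - q.z;
            best = std::min(best, std::sqrt(dx * dx + dy * dy + dz * dz));
        }
        worst = std::max(worst, best);
    }
    return worst;
}

// Symmetric Hausdorff distance between two point sets.
double hausdorff(const std::vector<Point>& a, const std::vector<Point>& b) {
    return std::max(directedHausdorff(a, b), directedHausdorff(b, a));
}
```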

If the poses have the same shape, we should compare their orientation. To compare orientation, we could use a somewhat simplified Freksa model. For each pose we should calculate the values

{x_y min, x_y max, x_z min, x_z max, y_z min, y_z max}

and then make sure that the difference between each pair of corresponding values for the two poses does not exceed another_threshold, also derived from experiments.
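One possible reading of these values (my assumption; the answer does not define them precisely) is the min/max angular direction of the pose's points about its centroid, projected onto each coordinate plane. A sketch under that assumption:

```cpp
#include <algorithm>
#include <cmath>
#include <limits>
#include <vector>

struct Point { double x, y, z; };

// Hypothetical interpretation of {x_y min, x_y max, ...}: angular
// extents of the pose about its centroid in each coordinate plane.
struct Extents { double xyMin, xyMax, xzMin, xzMax, yzMin, yzMax; };

Extents orientationExtents(const std::vector<Point>& pose) {
    double cx = 0, cy = 0, cz = 0;                  // centroid
    for (const Point& p : pose) { cx += p.x; cy += p.y; cz += p.z; }
    cx /= pose.size(); cy /= pose.size(); cz /= pose.size();

    const double inf = std::numeric_limits<double>::infinity();
    Extents e{inf, -inf, inf, -inf, inf, -inf};
    for (const Point& p : pose) {
        const double axy = std::atan2(p.y - cy, p.x - cx);
        const double axz = std::atan2(p.z - cz, p.x - cx);
        const double ayz = std::atan2(p.z - cz, p.y - cy);
        e.xyMin = std::min(e.xyMin, axy); e.xyMax = std::max(e.xyMax, axy);
        e.xzMin = std::min(e.xzMin, axz); e.xzMax = std::max(e.xzMax, axz);
        e.yzMin = std::min(e.yzMin, ayz); e.yzMax = std::max(e.yzMax, ayz);
    }
    return e;
}
```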

Hopefully this makes some sense, or at least you can draw something useful for your purpose from this.

answered Nov 15 '22 by Andrei


ICP attempts to minimize the distance between your point-cloud and a model, yes? Wouldn't it make the most sense to evaluate it based on what that distance actually is after execution?

I'm assuming it tries to minimize the sum of squared distances between each point you try to fit and the closest model point. So if you want a metric for quality, why not just normalize that sum by dividing by the number of points it's fitting? Yes, outliers will disrupt it somewhat, but they're also going to disrupt your fit somewhat.
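A sketch of that normalized score, assuming the per-point distances are available once ICP has converged (residuals is a hypothetical name):

```cpp
#include <vector>

// Mean squared point-to-model distance after ICP has run; lower is
// better. residuals holds each fitted point's distance to its closest
// model point.
double meanSquaredError(const std::vector<double>& residuals) {
    if (residuals.empty()) return 0.0;
    double sum = 0.0;
    for (double r : residuals) sum += r * r;
    return sum / residuals.size();
}
```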

It seems like any calculation you can come up with that provides more insight than whatever ICP is minimizing would be more useful incorporated into the algorithm itself, so it can minimize that too. =)

Update

I think I didn't quite understand the algorithm. It seems that it iteratively selects a subset of points, transforms them to minimize error, and then repeats those two steps? In that case your ideal solution selects as many points as possible while keeping error as small as possible.

You said combining the two terms seemed like a weak solution, but it sounds to me like an exact description of what you want, and it captures the two major features of the algorithm (yes?). Evaluating with something like error + B * (1 - selected / total), so that discarding points carries a cost, is spiritually similar to how regularization is used to address the overfitting problem with gradient descent (and similar) ML algorithms. Selecting a good value for B would take some experimentation.
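A sketch of that combined score; the names and the exact form of the penalty term are my assumption:

```cpp
#include <cstddef>

// Combined score to minimize: low residual error AND a high fraction of
// used points both make the score smaller. B is tuned by experiment.
double poseScore(double meanSqError, std::size_t used, std::size_t total,
                 double B) {
    const double ratio = static_cast<double>(used) / total;
    return meanSqError + B * (1.0 - ratio);  // discarding points costs B * fraction
}
```

Lower is better on both terms, so a pose only scores well if it fits tightly and uses most of the supplied points.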

answered Nov 16 '22 by FoolishSeth