
Computing the distance between two points in polar coordinates using floating-point

When the coordinates of two points in the plane are provided in polar form as r1, a1 and r2, a2, where r1, r2, a1, a2 are floating-point numbers, and the goal is to compute the distance between the two points as a floating-point number, it may be tempting to use the mathematical formula below:

D = sqrt(r1*r1 + r2*r2 - 2*r1*r2*cos(a1-a2))

This formula can be found in this and several other answers on StackOverflow. The link to the source provided in the linked answer is dead, but you can find this formula in many mathematical resources, for instance this website.

As a floating-point formula, this formula has a number of undesirable properties, listed below by decreasing seriousness. In short, my question is: what is a better formula to compute the distance D in the conditions stated above?
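
For reference, the formula transcribes directly into C as below; the helper name naive_polar_distance is mine and only serves to keep the later snippets short:

  #include <math.h>

  /* Direct transcription of the formula above; this is the version whose
     floating-point behaviour is examined in issues A/ to D/ below. */
  double naive_polar_distance(double r1, double a1, double r2, double a2)
  {
      return sqrt(r1*r1 + r2*r2 - 2*r1*r2*cos(a1 - a2));
  }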

A/ sqrt is applied to a negative number when r1 = r2 and r1*r1 is subnormal

When r1*r1 is subnormal and r1 = r2, the rounding of r1*r1 is different from the rounding of 2*r1*r2. At one extreme of this spectrum of counter-examples, r1*r1 is zero and 2*r1*r2 is nonzero. At the other extreme, r1*r1 is subnormal and gets rounded somewhat brutally downwards, and 2*r1*r2 is normal and gets rounded less brutally. Either way, the expression inside sqrt ends up being negative and the result of the floating-point formula for D is NaN.

Example values for double-precision:

  double r1 = sqrt(DBL_MIN * DBL_EPSILON) * sqrt(0.45);
  double r2 = r1;

Run it on Compiler Explorer.
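
The Compiler Explorer program is not reproduced here, but a self-contained program along these lines shows the NaN:

  #include <float.h>
  #include <math.h>
  #include <stdio.h>

  int main(void)
  {
      /* r1*r1 is about 0.45 times the smallest subnormal: it rounds to 0,
         while 2*r1*r2 rounds to a nonzero subnormal, so the argument of
         sqrt ends up negative. */
      double r1 = sqrt(DBL_MIN * DBL_EPSILON) * sqrt(0.45);
      double r2 = r1;
      double a1 = 0.0, a2 = 0.0;
      double dq = r1*r1 + r2*r2 - 2*r1*r2*cos(a1 - a2);
      printf("argument of sqrt: %a\n", dq);  /* negative */
      printf("distance: %g\n", sqrt(dq));    /* NaN */
      return 0;
  }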

B/ sqrt is applied to a negative number when r1 is close to r2 while different

When r1 and r2 are very close, the distance between the mathematical products r1*r1 and r1*r2 is very close to the distance between the mathematical products r1*r2 and r2*r2. When this common distance corresponds to a small, odd number of half-ULPs, the situation can arise that the floating-point multiplication r1*r1 rounds down, r1*r2 rounds up, and r2*r2 rounds down again. In these conditions, again, the commonly-seen formula takes the square root of a negative number.

Example values for double-precision:

  double r1 = 0x1.1c71c71c71d3fp0;
  double r2 = 0x1.1c71c71c71dcbp0;

Run it on Compiler Explorer.
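
Again, a short program along these lines (taking a1 == a2 and reusing the hypothetical naive_polar_distance sketch from above) exhibits the NaN:

  #include <math.h>
  #include <stdio.h>

  int main(void)
  {
      /* r1 and r2 differ only in their last bits; per the analysis above,
         the products round in opposite directions and the argument of
         sqrt comes out negative, so the result is NaN. */
      double r1 = 0x1.1c71c71c71d3fp0;
      double r2 = 0x1.1c71c71c71dcbp0;
      printf("distance: %g\n", naive_polar_distance(r1, 0.0, r2, 0.0));
      return 0;
  }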

C/ When r1 is close to r2, the subtraction magnifies the approximations that take place in the multiplications

This is in fact a more benign symptom of the same root causes as issues A/ and B/. When r1 is very close to r2, the phenomenon known as catastrophic cancellation occurs. The relative accuracy of the subtraction is terrible (for instance, with a1 = a2 and r1 close to r2, rounding during the multiplications can make the result of the subtraction 0.0, although the optimal answer, fabs(r1 - r2), was representable and infinitely more accurate in relative terms). Note that the absolute accuracy of the result is of the order of the ULP of r1 and r2, and this may still be fine.
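
For a concrete illustration (the values below are my own choice, not taken from the original example):

  #include <math.h>
  #include <stdio.h>

  int main(void)
  {
      /* a1 == a2 and r2 is the immediate successor of r1: the exact
         distance is fabs(r1 - r2), which is representable, but the
         expression under the square root cancels to 0. */
      double r1 = 1.0;
      double r2 = nextafter(1.0, 2.0);
      double dq = r1*r1 + r2*r2 - 2*r1*r2;   /* cos(a1 - a2) == 1 */
      printf("formula:       %g\n", sqrt(dq));        /* 0 */
      printf("fabs(r1 - r2): %g\n", fabs(r1 - r2));
      return 0;
  }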

D/ Overflow when the magnitude of r1 or r2 is too large

If only r1*r1 or r2*r2 overflows, the result is computed as +inf which may not be the best representable approximation of the mathematical distance.

If r1*r2 overflows, then the result of the subtraction is NaN, and the distance is therefore computed as NaN.
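
Example values of my own choosing, fed to the hypothetical naive_polar_distance sketch from above, trigger both behaviours:

  #include <math.h>
  #include <stdio.h>

  int main(void)
  {
      /* Only r1*r1 overflows: the formula returns +inf although the true
         distance, about 1e200, is representable. */
      printf("%g\n", naive_polar_distance(1e200, 0.0, 1.0, 0.0));

      /* r1*r1, r2*r2 and 2*r1*r2 all overflow: inf - inf gives NaN
         although the true distance, about 9.6e199, is representable. */
      printf("%g\n", naive_polar_distance(1e200, 0.0, 1e200, 1.0));
      return 0;
  }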


Issues A/ and B/ make the result meaningless when it did not have to be, and can be worked around by computing dq > 0 ? sqrt(dq) : 0, where dq stands for the argument of the square root. For the inputs that trigger them, this change makes the answer 0.0 instead of NaN. This result still has an infinite relative error, as do the results for other nearby inputs, because the change does not solve issue C/.
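
In code, that workaround might look like this (the function name is mine):

  #include <math.h>

  /* The workaround described above: clamp the argument of sqrt at 0.
     It removes the NaNs of issues A/ and B/ but does nothing for C/ or D/. */
  double clamped_polar_distance(double r1, double a1, double r2, double a2)
  {
      double dq = r1*r1 + r2*r2 - 2*r1*r2*cos(a1 - a2);
      return dq > 0 ? sqrt(dq) : 0;
  }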

Issue D/ can be solved by scaling if the programmer expects that the computation will be used in conditions that trigger it. For that matter, issue A/ could also be solved by scaling, but that would not solve issue B/.
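
As a sketch only, one possible scaling scheme (both the scheme and the helper name are my own illustration, not taken from the question):

  #include <math.h>

  /* Divide both radii by the larger magnitude, evaluate the formula on the
     scaled values, then multiply the result back. This avoids the overflows
     of issue D/ (and the subnormal products of issue A/), but not issues B/
     or C/; the clamp is kept so that B/ still yields 0 rather than NaN. */
  double scaled_polar_distance(double r1, double a1, double r2, double a2)
  {
      double s = fmax(fabs(r1), fabs(r2));
      if (s == 0)
          return 0;
      double q1 = r1 / s, q2 = r2 / s;
      double dq = q1*q1 + q2*q2 - 2*q1*q2*cos(a1 - a2);
      return s * (dq > 0 ? sqrt(dq) : 0);
  }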

It is likely that a full solution that solves all issues A-D will involve more computations. There may exist several sweet spots that solve only some of the issues, or that solve issue C/ more or less thoroughly, computing distances that are accurate to 10 ULP, or 3, or 1. Any solution that improves over the starting point is worthy of an answer.

Guillaume Melquiond already pointed out off-site that the formula below is equivalent to the original in mathematical terms, and it obviously escapes issues A/ and B/ since the argument of the square root is a sum of nonnegative terms:

D = sqrt((r1-r2)*(r1-r2) + 2*r1*r2*(1 - cos(a2-a1)))

In this solution, the catastrophic cancellation occurs in 1 - cos(a2-a1), so some aspects of issue C/ remain (although the computation with this formula is optimal for a1 = a2 and r1 close to r2, since then r1-r2 and cos(a2-a1) are exact). The situation with respect to issue D/ is improved, but there remain cases where the result was representable as a finite value and the formula computes +inf.
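
A sketch of this formula in C (the helper name is mine):

  #include <math.h>

  /* Guillaume Melquiond's rearrangement: both terms under the square root
     are nonnegative, so issues A/ and B/ cannot produce a NaN. The
     cancellation moves into 1 - cos(a2 - a1). */
  double melquiond_polar_distance(double r1, double a1, double r2, double a2)
  {
      double dr = r1 - r2;
      return sqrt(dr*dr + 2*r1*r2*(1 - cos(a2 - a1)));
  }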

asked Sep 09 '19 by Pascal Cuoq




3 Answers

Let b = (a1-a2)/2. Then, using cos(a1-a2) = 1 - 2*sin(b)*sin(b):

D = sqrt((r1-r2)*(r1-r2) + 4*r1*r2*sin(b)*sin(b))

This at least gets rid of square roots of negative numbers but would still have problems with overflow.

Perhaps

x = 2*sqrt(r1)*sqrt(r2)*sin(b)
D = hypot( r1-r2, x)

would resolve that?
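
A sketch of that suggestion in C (the function name is mine):

  #include <math.h>

  /* Half-angle form evaluated with hypot: sin(b)*sin(b) keeps the second
     term nonnegative, sqrt(r1)*sqrt(r2) avoids overflowing r1*r2, and
     hypot avoids squaring either argument. */
  double dmuir_polar_distance(double r1, double a1, double r2, double a2)
  {
      double b = (a1 - a2) / 2;
      double x = 2 * sqrt(r1) * sqrt(r2) * sin(b);
      return hypot(r1 - r2, x);
  }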

answered Nov 03 '22 by dmuir


I'm not an expert on this sort of numerical analysis, but I wanted to point out that these days there are computer programs that try to give a more "stable" version of your formula with respect to floating-point issues. One of the better ones is the so-called Herbie system: https://herbie.uwplse.org/

They do have a web demo, and when I plug in your equation, it comes up with:

https://herbie.uwplse.org/demo/9c74ffee9d36a7f50669c498f99c86d1c0b4c837.f316bcef3de0492d34dcdbc4c663eb04d00305c4/graph.html

The above link has a ton more info about the translation it recommends. In case the web link disappears, here's a screenshot of the final formula it proposes:

(screenshot of the formula Herbie proposes)

You can also get the same output in LaTeX or C. It claims that the error has been reduced from 31.5 to 18.5, though I'm not sure what those numbers precisely mean. They do have a short tutorial to get you started: https://herbie.uwplse.org/doc/latest/tutorial.html

Hope this helps!

answered Nov 03 '22 by alias


The distance remains the same if we rotate the whole figure.
Using the pol2cart idea, we can have some variations like:

x1 = r1 cos((a2-a1)/2), y1 = -r1 sin((a2-a1)/2)
x2 = r2 cos((a2-a1)/2), y2 =  r2 sin((a2-a1)/2)

dist = hypot((r2-r1) cos((a2-a1)/2), (r2+r1) sin((a2-a1)/2))

If angles are unbounded rather than properly reduced, then it's necessary to expand the sin/cos formulas further, which makes this formulation not so useful...
r2+r1 might also overflow, in which case we could apply simple scaling like 2*hypot((r1/2+r2/2)...)
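
A sketch of this variant in C (the function name is mine; the scaling for very large radii mentioned above is left out):

  #include <math.h>

  /* Rotate the figure so that the two points sit at angles -(a2-a1)/2 and
     +(a2-a1)/2, then take the hypotenuse of the Cartesian differences. */
  double rotated_polar_distance(double r1, double a1, double r2, double a2)
  {
      double h = (a2 - a1) / 2;
      return hypot((r2 - r1) * cos(h), (r2 + r1) * sin(h));
  }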

answered Nov 03 '22 by aka.nice