
Robustly and accurately computing the natural logarithm of the quotient of two floating-point numbers

One obvious problem when computing log (a/b), where a and b are two non-zero positive finite floating-point operands of a given precision (here called the native precision), is that the quotient a/b may not be representable as a floating-point number in that precision. Furthermore, accuracy will be lost when the ratio of the source operands is close to unity.
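
A small demonstration of the representability problem (the values are picked purely for illustration):

#include <stdio.h>
#include <math.h>

/* The true result log(a/b) = log(1e76) is about 175, yet the float
   quotient a/b overflows to infinity, and so does logf(a/b). */
int main (void)
{
    float a = 1.0e38f;
    float b = 1.0e-38f;
    printf ("a / b      = %g\n", a / b);         /* inf */
    printf ("logf (a/b) = %g\n", logf (a / b));  /* inf, not ~175 */
    return 0;
}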

This could be worked around by temporarily switching to higher-precision computation. But such higher precision may not be readily available, for example when the native precision is double and long double simply maps to double. The use of higher precision could also carry a significant performance penalty, for example on GPUs, where the throughput of float computation may be up to 32 times that of double computation.
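
For concreteness, when the native precision is float and double is usable, the higher-precision workaround amounts to a one-liner (a sketch; the helper name log_quotient_dbl is mine):

#include <math.h>

/* Sketch of the higher-precision workaround for float operands: the
   double quotient can neither overflow nor underflow for finite positive
   float inputs, and rounding the double log back to float loses little
   accuracy. Viable only where fast double arithmetic is available. */
float log_quotient_dbl (float a, float b)
{
    return (float)log ((double)a / (double)b);
}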

One could decide to use the quotient rule of logarithms to compute log (a/b) as log(a) - log(b), but this exposes the computation to the risk of subtractive cancellation when a and b are close to each other, resulting in very large errors.
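
The effect is easy to trigger: for operands that are close together but away from 1, the rounding error in each individual logf() call can exceed the true result log(a/b) itself. A small demonstration (exact output depends on the math library):

#include <stdio.h>
#include <float.h>
#include <math.h>

/* a and b differ by ~2 ulps of 1000, so the true log(a/b) is about
   1.22e-7. logf(a) and logf(b) are each ~6.9078, with rounding errors
   of up to half an ulp of 6.9078 (~2.4e-7) apiece, so their difference
   carries almost no correct bits; a typical library yields 0.0 here. */
int main (void)
{
    float a = 1000.0f * (1.0f + FLT_EPSILON);
    float b = 1000.0f;
    printf ("logf(a) - logf(b) = %15.8e\n", logf (a) - logf (b));
    printf ("log1pf((a - b)/b) = %15.8e\n", log1pf ((a - b) / b));
    return 0;
}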

How can the logarithm of the quotient of two floating-point numbers of a given precision be computed both accurately, e.g. with an error of less than 2 ulps, and robustly, i.e. with no underflow or overflow in intermediate computations, without resorting to higher-than-native precision?

asked Aug 06 '18 by njuffa

1 Answer

The best approach I have identified so far distinguishes three cases, based on the quotient of the larger source operand divided by the smaller source operand. This ratio tells us how far apart the operands are. If it is so large that it exceeds the native precision's maximum representable number (i.e. the division overflows to infinity), the quotient rule must be used, and the result is computed as log(a) - log(b). If the ratio is close to unity, the computation should take advantage of the function log1p() to improve accuracy, computing the result as log1p ((a - b) / b). The Sterbenz lemma suggests that 2.0 is a good switchover point: if the ratio is ≤ 2, then b/2 ≤ a ≤ 2b, so a - b is computed exactly. For all other cases, the direct computation log (a/b) can be used.

Below, I show the implementation of this design for a function accepting float arguments. The use of float makes it easier to assess the accuracy, as it allows denser sampling of possible test cases. Obviously, overall accuracy will depend on the quality of the math library's implementations of logf() and log1pf(). Using a math library with functions that are almost correctly rounded (maximum error in logf() < 0.524 ulp, maximum error in log1pf() < 0.506 ulp), the maximum error observed in log_quotient() was < 1.5 ulps. Using a different library with faithfully-rounded implementations of the functions (maximum error in logf() < 0.851 ulp, maximum error in log1pf() < 0.874 ulp), the maximum error observed in log_quotient() was < 1.7 ulps.

#include <float.h>
#include <math.h>

/* Compute log (a/b) for a, b ∈ (0, ∞) accurately and robustly, i.e. avoiding
   underflow and overflow in intermediate computations. Using a math library 
   that provides log1pf() and logf() with a maximum error close to 0.5 ulps,
   the maximum observed error was 1.49351 ulp.
*/
float log_quotient (float a, float b)
{
    /* gauge how far apart the operands are; overflows to INFINITY
       when the true ratio exceeds FLT_MAX */
    float ratio = fmaxf (a, b) / fminf (a, b);
    if (ratio > FLT_MAX) {
        /* quotient a/b not representable: fall back on the quotient rule;
           the operands are far apart, so there is no subtractive
           cancellation in the difference of the logarithms */
        return logf (a) - logf (b);
    } else if (ratio > 2.0f) {
        /* operands far enough apart that direct division is accurate */
        return logf (a / b);
    } else {
        /* ratio <= 2: a - b is exact (Sterbenz lemma), and log1pf()
           preserves accuracy for quotients near unity */
        return log1pf ((a - b) / b);
    }
}
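
As a usage example, a rough sketch of the kind of sampling test behind such error bounds might look as follows. This harness is my own construction, not the test setup used for the numbers quoted above; it measures the error in ulps against a double-precision reference and assumes log_quotient() from above is in scope.

#include <stdio.h>
#include <stdlib.h>
#include <math.h>

/* Hypothetical test sketch: draw random positive floats spanning a wide
   exponent range (so all three branches are exercised), then measure the
   ulp error of log_quotient() against a double-precision reference. The
   reference itself is not correctly rounded, so reported errors are
   approximate. */
static float random_positive_float (void)
{
    int exponent = (rand () % 200) - 100;  /* 2**-100 .. 2**99 */
    float mantissa = 1.0f + (float)rand () / (float)RAND_MAX;
    return ldexpf (mantissa, exponent);
}

int main (void)
{
    double max_err = 0.0;
    for (int i = 0; i < 10000000; i++) {
        float a = random_positive_float ();
        float b = random_positive_float ();
        if (a == b) continue;  /* result is exactly 0; skip ulp division */
        float res = log_quotient (a, b);
        double ref = log ((double)a / (double)b);
        double ulp = ldexp (1.0, ilogbf ((float)ref) - 23);
        double err = fabs ((double)res - ref) / ulp;
        if (err > max_err) max_err = err;
    }
    printf ("maximum observed error = %.5f ulp\n", max_err);
    return 0;
}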

answered Nov 15 '22 by njuffa