
I do *not* want correct rounding for function exp

The implementation of the C mathematical library on Debian systems apparently has an (IEEE 754-2008)-compliant implementation of the function exp, implying that rounding is always correct:

(from Wikipedia) The IEEE floating point standard guarantees that add, subtract, multiply, divide, fused multiply–add, square root, and floating point remainder will give the correctly rounded result of the infinite precision operation. No such guarantee was given in the 1985 standard for more complex functions and they are typically only accurate to within the last bit at best. However, the 2008 standard guarantees that conforming implementations will give correctly rounded results which respect the active rounding mode; implementation of the functions, however, is optional.

It turns out that I am encountering a case where this feature is actually a hindrance, because the exact result of the exp function often lies almost exactly halfway between two consecutive double values (1), and the program then carries out a great many further such computations, losing up to a factor of 400 (!) in speed: this was actually the explanation of my (ill-posed :-S) Question #43530011.

(1) More precisely, this happens when the argument of exp turns out to be of the form (2k + 1) × 2^(-53) with k a rather small integer (like 242, for instance). In particular, the computations involved in pow (1. + x, 0.5) tend to call exp with such an argument when x is of the order of magnitude of 2^(-44).

Since correctly rounded implementations can be so time-consuming in certain circumstances, I guess that the developers will also have devised a way to get a slightly less precise result (say, accurate only to 0.6 ULP or something like this) in a time that is (roughly) bounded for every value of the argument in a given range… (2)

… But how to do this??

(2) What I mean is that I just do not want some exceptional values of the argument, like (2k + 1) × 2^(-53), to be much more time-consuming than most values of the same order of magnitude; but of course I do not mind if some exceptional values of the argument go much faster, or if large arguments (in absolute value) need a larger computation time.

Here is a minimal program showing the phenomenon:

#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include <time.h>

int main (void)
 {
  int i;
  double a, c;
  c = 0;
  clock_t start = clock ();
  for (i = 0; i < 1e6; ++i) // Doing a large number of times the same type of computation with different values, to smoothen random fluctuations.
   {
    a = (double) (1 + 2 * (rand () % 0x400)) / 0x20000000000000; // "a" has only a few significant digits, and its last non-zero digit is at (fixed-point) position 53.
    c += exp (a); // Just to be sure that the compiler will actually perform the computation of exp (a).
   }
  clock_t stop = clock ();
  printf ("%e\n", c); // Just to be sure that the compiler will actually perform the computation.
  printf ("Clock time spent: %ld\n", (long) (stop - start)); // clock_t need not be int, so cast before printing.
  return 0;
 }

Now after gcc -std=c99 program53.c -lm -o program53:

$ ./program53
1.000000e+06
Clock time spent: 13470008
$ ./program53 
1.000000e+06
Clock time spent: 13292721
$ ./program53 
1.000000e+06
Clock time spent: 13201616

On the other hand, with program52 and program54 (obtained by replacing 0x20000000000000 with 0x10000000000000 and 0x40000000000000 respectively):

$ ./program52
1.000000e+06
Clock time spent: 83594
$ ./program52
1.000000e+06
Clock time spent: 69095
$ ./program52
1.000000e+06
Clock time spent: 54694
$ ./program54
1.000000e+06
Clock time spent: 86151
$ ./program54
1.000000e+06
Clock time spent: 74209
$ ./program54
1.000000e+06
Clock time spent: 78612

Beware, the phenomenon is implementation-dependent! Apparently, among the common implementations, only those on Debian systems (including Ubuntu) exhibit it.

P.-S.: I hope that my question is not a duplicate: I searched for a similar question thoroughly without success, but maybe I did not use the relevant keywords… :-/

asked Jun 03 '17 by Rémi Peyre


2 Answers

To answer the general question on why the library functions are required to give correctly rounded results:

Floating-point is hard, and often counterintuitive. Not every programmer has read what they should have. When libraries allowed slightly inaccurate rounding, people complained about the precision of the library function when their inaccurate computations inevitably went wrong and produced nonsense. In response, the library writers made their libraries exactly rounded, so now people cannot shift the blame to them.

In many cases, specific knowledge about floating-point algorithms can yield considerable improvements in accuracy and/or performance, as in this test case:

Taking exp() of numbers very close to 0 is problematic in floating point, since the result is a number close to 1 while all the precision lies in the difference from one, so most significant digits are lost. It is more precise (and significantly faster in this test case) to compute exp(x) - 1 through the C math library function expm1(x). If exp() itself is really needed, it is still much faster to use expm1(x) + 1.

A similar concern exists for computing log(1 + x), for which there is the function log1p(x).

A quick fix that speeds up the provided testcase:

#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include <time.h>

int main (void)
{
  int i;
  double a, c;
  c = 0;
  clock_t start = clock ();
  for (i = 0; i < 1e6; ++i) // Doing a large number of times the same type of computation with different values, to smoothen random fluctuations.
    {
      a = (double) (1 + 2 * (rand () % 0x400)) / 0x20000000000000; // "a" has only a few significant digits, and its last non-zero digit is at (fixed-point) position 53.
      c += expm1 (a) + 1; // replace exp() with expm1() + 1
    }
  clock_t stop = clock ();
  printf ("%e\n", c); // Just to be sure that the compiler will actually perform the computation.
  printf ("Clock time spent: %ld\n", (long) (stop - start)); // clock_t need not be int, so cast before printing.
  return 0;
}

For this case, the timings on my machine are thus:

Original code

1.000000e+06

Clock time spent: 21543338

Modified code

1.000000e+06

Clock time spent: 55076

Programmers with advanced knowledge of the accompanying trade-offs may sometimes consider using approximate results where precision is not critical.

An experienced programmer may write an approximate implementation of a slow function using methods like Newton-Raphson iteration or Taylor/Maclaurin polynomials, use inexactly rounded special functions from libraries like Intel's MKL or AMD's ACML, relax the compiler's floating-point standard compliance, reduce precision to IEEE 754 binary32 (float), or combine several of these approaches.

Note that a better description of the problem would enable a better answer.

answered Nov 14 '22 by EOF


This is an "answer"/follow-up to EOF's preceding comments about his trecu() algorithm and code for his "binary tree summation" suggestion. A "prerequisite" to reading this is reading that discussion. It would be nice to collect all of that in one organized place, but I haven't done that yet...

...What I did do was build EOF's trecu() into the test program from the preceding answer, which I'd written by modifying the OP's original test program. But then I found that trecu() generated exactly (and I mean exactly) the same answer as the "plain sum" c using exp(), not the sum cm1 using expm1() that we'd expected from a more accurate binary tree summation.

But that test program's a bit (maybe two bits:) "convoluted" (or, as EOF said, "unreadable"), so I wrote a separate, smaller test program, given below (with example runs and discussion below that), to test/exercise trecu() on its own. Moreover, I also wrote the function bintreesum() into the code below, which abstracts/encapsulates the iterative code for binary tree summation that I'd embedded in the preceding test program. In that preceding case, my iterative code indeed came close to the cm1 answer, which is why I'd expected EOF's recursive trecu() to do the same. Long and short of it is that, below, the same thing happens: bintreesum() stays close to the correct answer, while trecu() drifts further away, exactly reproducing the "plain sum".

What we're summing below is just sum(i), i=1...n, which is the well-known n(n+1)/2. But that's not quite right: to reproduce the OP's problem, the summand is not i alone but rather 1 + i*10^e, where e (negative) can be given on the command line. So for, say, n=5 you don't get 15 but rather 5.000...00015, and for n=6 you get 6.000...00021, etc. And to avoid a long, long format, I printf() sum-n to remove that integer part. Okay??? So here's the code...

/* Quoting from EOF's comment...
   What I (EOF) proposed is effectively a binary tree of additions:
   a+b+c+d+e+f+g+h as ((a+b)+(c+d))+((e+f)+(g+h)).
   Like this: Add adjacent pairs of elements, this produces
   a new sequence of n/2 elements.
   Recurse until only one element is left. */
#include <stdio.h>
#include <stdlib.h>

double trecu(double *vals, double sum, int n) {
  int midn = n/2;
  switch (n) {
    case  0: break;
    case  1: sum += *vals; break;
    default: sum = trecu(vals+midn, trecu(vals,sum,midn), n-midn); break; }
  return(sum);
  } /* --- end-of-function trecu() --- */

double bintreesum(double *vals, int n, int binsize) {
  double binsum = 0.0;
  int nbin0 = (n+(binsize-1))/binsize,
      nbin1 = (nbin0+(binsize-1))/binsize,
      nbins[2] = { nbin0, nbin1 };
  double *vbins[2] = {
            (double *)malloc(nbin0*sizeof(double)),
            (double *)malloc(nbin1*sizeof(double)) },
         *vbin0=vbins[0], *vbin1=vbins[1];
  int ibin=0, i;
  for ( i=0; i<nbin0; i++ ) vbin0[i] = 0.0;
  for ( i=0; i<n; i++ ) vbin0[i%nbin0] += vals[i];
  while ( nbins[ibin] > 1 ) {
    int jbin = 1-ibin;        /* other bin, 0<-->1 */
    nbins[jbin] = (nbins[ibin]+(binsize-1))/binsize;
    for ( i=0; i<nbins[jbin]; i++ ) vbins[jbin][i] = 0.0;
    for ( i=0; i<nbins[ibin]; i++ )
      vbins[jbin][i%nbins[jbin]] += vbins[ibin][i];
    ibin = jbin;              /* swap bins for next pass */
    } /* --- end-of-while(nbins[ibin]>0) --- */
  binsum = vbins[ibin][0];
  free((void *)vbins[0]);  free((void *)vbins[1]);
  return ( binsum );
  } /* --- end-of-function bintreesum() --- */

#if defined(TESTTRECU)
#include <math.h>
#define MAXN (2000000)
int main(int argc, char *argv[]) {
  int N       = (argc>1? atoi(argv[1]) : 1000000 ),
      e       = (argc>2? atoi(argv[2]) : -10 ),
      binsize = (argc>3? atoi(argv[3]) : 2 );
  double tens = pow(10.0,(double)e);
  double *vals = (double *)malloc(sizeof(double)*MAXN),
         sum = 0.0;
  int i;
  if ( N > MAXN ) N=MAXN;
  for ( i=0; i<N; i++ ) vals[i] = 1.0 + tens*(double)(i+1);
  for ( i=0; i<N; i++ ) sum += vals[i];
  printf(" N=%d, Sum_i=1^N {1.0 + i*%.1e} - N  =  %.8e,\n"
         "\t plain_sum-N  = %.8e,\n"
         "\t trecu-N      = %.8e,\n"
         "\t bintreesum-N = %.8e \n",
         N, tens, tens*((double)N)*((double)(N+1))/2.0,
          sum-(double)N,
         trecu(vals,0.0,N)-(double)N,
         bintreesum(vals,N,binsize)-(double)N );
  free((void *)vals);
  return 0;
  } /* --- end-of-function main() --- */
#endif

So if you save that as trecu.c, compile it as cc -DTESTTRECU trecu.c -lm -o trecu, and then run it with zero to three optional command-line args as trecu #trials e binsize. Defaults are #trials=1000000 (like the OP's program), e=-10, and binsize=2 (so my bintreesum() function does a binary-tree sum rather than using larger bins).

And here are some test results illustrating the problem described above,

bash-4.3$ ./trecu              
 N=1000000, Sum_i=1^N {1.0 + i*1.0e-10} - N  =  5.00000500e+01,
         plain_sum-N  = 5.00000500e+01,
         trecu-N      = 5.00000500e+01,
         bintreesum-N = 5.00000500e+01 
bash-4.3$ ./trecu 1000000 -15
 N=1000000, Sum_i=1^N {1.0 + i*1.0e-15} - N  =  5.00000500e-04,
         plain_sum-N  = 5.01087168e-04,
         trecu-N      = 5.01087168e-04,
         bintreesum-N = 5.00000548e-04 
bash-4.3$ 
bash-4.3$ ./trecu 1000000 -16
 N=1000000, Sum_i=1^N {1.0 + i*1.0e-16} - N  =  5.00000500e-05,
         plain_sum-N  = 6.67552231e-05,
         trecu-N      = 6.67552231e-05,
         bintreesum-N = 5.00001479e-05 
bash-4.3$ 
bash-4.3$ ./trecu 1000000 -17
 N=1000000, Sum_i=1^N {1.0 + i*1.0e-17} - N  =  5.00000500e-06,
         plain_sum-N  = 0.00000000e+00,
         trecu-N      = 0.00000000e+00,
         bintreesum-N = 4.99992166e-06 

So you can see that for the default run, e=–10, everybody's doing everything right. That is, the top line that says "Sum" just does the n(n+1)/2 thing, so presumably displays the right answer. And everybody below that agrees for the default e=–10 test case. But for the e=–15 and e=–16 cases below that, trecu() exactly agrees with the plain_sum, while bintreesum stays pretty close to the right answer. And finally, for e=–17, plain_sum and trecu() have "disappeared", while bintreesum()'s still hanging in there pretty well.

So trecu()'s correctly doing the sum all right, but its recursion's apparently not doing that "binary tree" type of thing that my more straightforward iterative bintreesum()'s apparently doing correctly. And that indeed demonstrates that EOF's suggestion for "binary tree summation" realizes quite an improvement over the plain_sum for these 1+epsilon kind of cases. So we'd really like to see his trecu() recursion work!!! When I originally looked at it, I thought it did work. But that double-recursion (is there a special name for that?) in his default: case is apparently more confusing (at least to me:) than I thought. Like I said, it is doing the sum, but not the "binary tree" thing.

Okay, so who'd like to take on the challenge and explain what's going on in that trecu() recursion? And, maybe more importantly, fix it so it does what's intended. Thanks.

answered Nov 14 '22 by John Forkosh