Background: I'm writing some geometry software in Java. I need the precision offered by Java's BigDecimal class. Since BigDecimal doesn't have support for trig functions, I thought I'd take a look at how Java implements the standard Math library methods and write my own version with BigDecimal support.
Reading this JavaDoc, I learned that Java uses algorithms "from the well-known network library netlib as the package 'Freely Distributable Math Library,' fdlibm. These algorithms, which are written in the C programming language, are then to be understood as executed with all floating-point operations following the rules of Java floating-point arithmetic."
My Question: I looked up fdlibm's sin function, k_sin.c, and it looks like they use a Taylor series of order 13 to approximate sine (edit - njuffa commented that fdlibm uses a minimax polynomial approximation). The code defines the coefficients of the polynomial as S1 through S6. I decided to check the values of these coefficients, and found that S6 is only correct to one significant digit! I would expect it to be 1/(13!), which Windows Calculator and Google Calc tell me is 1.6059044...e-10, not 1.58969099521155010221e-10 (which is the value for S6 in the code). Even S5 differs in the fifth digit from 1/(11!). Can someone explain this discrepancy? Specifically, how are those coefficients (S1 through S6) determined?
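A quick way to reproduce that comparison in Java with BigDecimal (the class and method names here are just illustrative); the fdlibm source in question follows below:

import java.math.BigDecimal;
import java.math.MathContext;

public class CoefficientCheck {
    // 1/n! to the requested precision
    static BigDecimal inverseFactorial(int n, MathContext mc) {
        BigDecimal factorial = BigDecimal.ONE;
        for (int i = 2; i <= n; i++) {
            factorial = factorial.multiply(BigDecimal.valueOf(i));
        }
        return BigDecimal.ONE.divide(factorial, mc);
    }

    public static void main(String[] args) {
        MathContext mc = new MathContext(20);
        System.out.println("1/11! = " + inverseFactorial(11, mc)); // ~2.5052108385e-8
        System.out.println("S5    =   2.50507602534068634195e-08"); // differs in the 5th digit
        System.out.println("1/13! = " + inverseFactorial(13, mc)); // ~1.6059043837e-10
        System.out.println("S6    =   1.58969099521155010221e-10"); // differs in the 2nd digit
    }
}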
/* @(#)k_sin.c 1.3 95/01/18 */
/*
* ====================================================
* Copyright (C) 1993 by Sun Microsystems, Inc. All rights reserved.
*
* Developed at SunSoft, a Sun Microsystems, Inc. business.
* Permission to use, copy, modify, and distribute this
* software is freely granted, provided that this notice
* is preserved.
* ====================================================
*/
/* __kernel_sin( x, y, iy)
* kernel sin function on [-pi/4, pi/4], pi/4 ~ 0.7854
* Input x is assumed to be bounded by ~pi/4 in magnitude.
* Input y is the tail of x.
* Input iy indicates whether y is 0. (if iy=0, y assume to be 0).
*
* Algorithm
* 1. Since sin(-x) = -sin(x), we need only to consider positive x.
* 2. if x < 2^-27 (hx<0x3e400000 0), return x with inexact if x!=0.
* 3. sin(x) is approximated by a polynomial of degree 13 on
* [0,pi/4]
* sin(x) ~ x + S1*x^3 + ... + S6*x^13
* where
*
* |sin(x)/x - (1 + S1*x^2 + S2*x^4 + S3*x^6 + S4*x^8 + S5*x^10 + S6*x^12)| <= 2^-58
* 4. sin(x+y) = sin(x) + sin'(x')*y
* ~ sin(x) + (1-x*x/2)*y
* For better accuracy, let
* r = x^3*(S2+x^2*(S3+x^2*(S4+x^2*(S5+x^2*S6))))
* then
* sin(x) = x + (S1*x^3 + (x^3*(r-y/2)+y))
*/
#include "fdlibm.h"
#ifdef __STDC__
static const double
#else
static double
#endif
half = 5.00000000000000000000e-01, /* 0x3FE00000, 0x00000000 */
S1 = -1.66666666666666324348e-01, /* 0xBFC55555, 0x55555549 */
S2 = 8.33333333332248946124e-03, /* 0x3F811111, 0x1110F8A6 */
S3 = -1.98412698298579493134e-04, /* 0xBF2A01A0, 0x19C161D5 */
S4 = 2.75573137070700676789e-06, /* 0x3EC71DE3, 0x57B1FE7D */
S5 = -2.50507602534068634195e-08, /* 0xBE5AE5E6, 0x8A2B9CEB */
S6 = 1.58969099521155010221e-10; /* 0x3DE5D93A, 0x5ACFD57C */
#ifdef __STDC__
double __kernel_sin(double x, double y, int iy)
#else
double __kernel_sin(x, y, iy)
double x,y; int iy; /* iy=0 if y is zero */
#endif
{
    double z,r,v;
    int ix;
    ix = __HI(x)&0x7fffffff;        /* high word of x */
    if(ix<0x3e400000)               /* |x| < 2**-27 */
        {if((int)x==0) return x;}   /* generate inexact */
    z = x*x;
    v = z*x;
    r = S2+z*(S3+z*(S4+z*(S5+z*S6)));
    if(iy==0) return x+v*(S1+z*r);
    else      return x-((z*(half*y-v*r)-y)-v*S1);
}
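For what it's worth, the core of that evaluation can be transcribed to plain Java doubles roughly like this (a sketch only: it drops the two-part argument (x, y) and the |x| < 2^-27 shortcut, so it is not a faithful port of __kernel_sin):

// Rough Java transcription of the polynomial evaluation in __kernel_sin,
// using the same coefficients and Horner's scheme; assumes |x| <= pi/4.
static double kernelSinSketch(double x) {
    final double S1 = -1.66666666666666324348e-01;
    final double S2 =  8.33333333332248946124e-03;
    final double S3 = -1.98412698298579493134e-04;
    final double S4 =  2.75573137070700676789e-06;
    final double S5 = -2.50507602534068634195e-08;
    final double S6 =  1.58969099521155010221e-10;
    double z = x * x;                                       // x^2
    double v = z * x;                                       // x^3
    double r = S2 + z * (S3 + z * (S4 + z * (S5 + z * S6)));
    return x + v * (S1 + z * r);                            // x + S1*x^3 + ... + S6*x^13
}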
Bhaskara's approximation starts from an ansatz of the form f(θ) = a·p(θ) / (1 + b·p(θ)) for 0 ≤ θ ≤ 180, where p(θ) = θ(180 − θ) (so p(90) = 8100). This gives Bhaskara's approximation formula for the sine function: sin(θ°) ≈ 4θ(180 − θ) / (40500 − θ(180 − θ)), for 0 ≤ θ ≤ 180.
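A minimal Java sketch of that formula (the method name is mine, just for illustration):

// Bhaskara's rational approximation; theta is in degrees, 0 <= theta <= 180.
static double bhaskaraSin(double theta) {
    double p = theta * (180.0 - theta);
    return 4.0 * p / (40500.0 - p);
}
// Spot checks: bhaskaraSin(30) is exactly 0.5, while bhaskaraSin(45) ~ 0.7059
// versus sin(45 degrees) ~ 0.7071.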
For example, although it would take some time to prove this, sin(x) is not a polynomial function (can you find real numbers a0, a1, ..., an such that sin(x) = a0 + a1*x + ··· + an*x^n?).
[Figure: sin(x) in white, with the first-order polynomial in red, the third in cyan, the fifth in green, and the seventh in yellow; the accuracy improves as the order of the polynomial increases.]
We can use trig identities to get everything down to 0 ≤ x ≤ π/4, and then need a way to approximate sin x on that interval. On 0 ≤ x ≤ 2^-27, we can just stick with sin x ≈ x (which the Taylor polynomial would also give, within the tolerance of a double).
The reason for not using a Taylor polynomial is in step 3 of the algorithm's comment. The Taylor polynomial gives (provable) accuracy near zero at the expense of less accuracy as you get away from zero. By the time you get to π/4, the 13th-order Taylor polynomial (divided by x) differs from (sin x)/x by 3e-14. This is far worse than fdlibm's error of 2^-58. In order to get that accurate with a Taylor polynomial, you'd need to go until (π/4)^(n-1)/n! < 2^-58, which takes another 2 or 3 terms.
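You can check that figure with ordinary doubles (this helper is hypothetical, not part of fdlibm): evaluate the degree-13 Taylor polynomial at π/4 and compare it against Math.sin.

// Degree-13 Taylor polynomial of sin about 0, with exact 1/n! coefficients.
static double taylor13(double x) {
    double z = x * x;
    return x * (1.0 + z * (-1.0/6 + z * (1.0/120 + z * (-1.0/5040
             + z * (1.0/362880 + z * (-1.0/39916800 + z * (1.0/6227020800.0)))))));
}
// taylor13(Math.PI/4) - Math.sin(Math.PI/4) comes out around 2e-14,
// consistent with the figure quoted above and far larger than 2^-58.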
So why does fdlibm settle for an accuracy of 2^-58? Because that's past the tolerance of a double (which only has 52 bits in its mantissa).
In your case though, you're wanting arbitrarily many bits of sin x. To use fdlibm's approach, you'd need to recalculate the coefficients whenever your desired accuracy changes. Your best approach seems to be to stick with the Taylor polynomial at 0, since it's very easily computable, and take terms until (π/4)^(n-1)/n! meets your desired accuracy.
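A minimal sketch of that approach with BigDecimal (the names are mine, and it assumes the caller has already reduced the argument to [0, π/4]; range reduction and a precise value of π are not handled here):

import java.math.BigDecimal;
import java.math.MathContext;

public class BigSin {
    // Taylor series about 0: x - x^3/3! + x^5/5! - ...
    // Assumes |x| <= pi/4; the caller is responsible for range reduction.
    static BigDecimal sinTaylor(BigDecimal x, MathContext mc) {
        BigDecimal x2 = x.multiply(x, mc);
        BigDecimal term = x;                       // current term x^n / n!
        BigDecimal sum = x;
        // stop once the next term is negligible at the requested precision (relative to x)
        BigDecimal eps = x.abs().movePointLeft(mc.getPrecision() + 2);
        for (int n = 3; term.abs().compareTo(eps) > 0; n += 2) {
            term = term.multiply(x2, mc)
                       .divide(BigDecimal.valueOf((long) n * (n - 1)), mc)
                       .negate();                  // next term = -prev * x^2 / (n*(n-1))
            sum = sum.add(term, mc);
        }
        return sum;
    }
}

Each new term is obtained from the previous one by multiplying by -x^2/(n(n-1)), so no factorials or powers ever need to be recomputed from scratch; for example, sinTaylor(new BigDecimal("0.5"), new MathContext(50)) agrees with Math.sin(0.5) through all the digits a double can represent.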
njuffa had a useful idea of using identities to further restrict your domain, for example sin(x) = 3*sin(x/3) - 4*sin^3(x/3). Using this would let you restrict your domain to 0 ≤ x ≤ π/12. And you could use it twice to restrict your domain to 0 ≤ x ≤ π/36. This would make your Taylor expansion reach your desired accuracy much more quickly. And instead of trying to get an arbitrarily accurate value of π for (π/4)^(n-1)/n!, I'd recommend rounding π up to 4 and going until 1/n! meets your desired accuracy (or 3^(-n)/n! or 9^(-n)/n! if you've used the trig identity once or twice).
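Putting the identity together with the hypothetical sinTaylor sketch above (again only a sketch, applying the reduction exactly twice):

// Reduce the argument by a factor of 9 with two applications of
// sin(3u) = 3*sin(u) - 4*sin(u)^3, then use the Taylor series on x/9.
static BigDecimal sinReduced(BigDecimal x, MathContext mc) {
    BigDecimal s = BigSin.sinTaylor(x.divide(BigDecimal.valueOf(9), mc), mc);
    for (int i = 0; i < 2; i++) {
        // undo one factor-of-3 reduction: sin(3u) = 3*sin(u) - 4*sin(u)^3
        s = BigDecimal.valueOf(3).multiply(s, mc)
             .subtract(BigDecimal.valueOf(4).multiply(s.pow(3, mc), mc), mc);
    }
    return s;
}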