I would like to compute both the sine and cosine of a value together (for example, to create a rotation matrix). Of course I could compute them separately, one after another, like a = cos(x); b = sin(x);, but I wonder if there is a faster way when both values are needed.
Edit: To summarize the answers so far:
- Vlad said that there is the asm instruction FSINCOS, which computes both of them (in almost the same time as a call to FSIN alone).
- Like Chi noticed, this optimization is sometimes already done by the compiler (when using optimization flags).
- caf pointed out that the functions sincos and sincosf are probably available and can be called directly by just including math.h.
- tanascius' approach of using a look-up table is discussed controversially. (However, on my computer and in a benchmark scenario it runs 3x faster than sincos, with almost the same accuracy for 32-bit floating point values.)
- Joel Goodwin linked to an interesting approach using an extremely fast approximation technique with quite good accuracy (for me, this is even faster than the table look-up).
The most well-known approximation method is to use a Taylor series about 0 (also known as a Maclaurin series), which for sine becomes: x - x^3/6 + x^5/120 - x^7/5040 + ...
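As a rough illustration, that polynomial can be evaluated in Horner form; this sketch assumes the argument has already been reduced to a small range such as [-pi, pi]:

// Sketch: the sine Maclaurin polynomial above, evaluated in Horner form.
// Only reasonable for |x| already reduced to roughly [-pi, pi].
inline double sin_maclaurin(double x) {
    double x2 = x * x;
    return x * (1.0 + x2 * (-1.0 / 6.0 + x2 * (1.0 / 120.0 - x2 / 5040.0)));
}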
From the unit circle we see that sin x and cos x can only have the same value in two places, at x = π/4 and x = 5π/4 (45° and 225°). The equation sin x = cos x can also be solved by dividing through by cos x, which gives tan x = 1 and hence x = π/4 + kπ. Putting k = 0 and k = 1 gives the solutions π/4 (45°) and π/4 + π = 5π/4 (45° + 180° = 225°).
Modern Intel/AMD processors have the instruction FSINCOS for calculating the sine and cosine functions simultaneously. If you need strong optimization, perhaps you should use it.
Here is a small example: http://home.broadpark.no/~alein/fsincos.html
Here is another example (for MSVC): http://www.codeguru.com/forum/showthread.php?t=328669
Here is yet another example (with gcc): http://www.allegro.cc/forums/thread/588470
Hope one of them helps. (I didn't use this instruction myself, sorry.)
As they are supported at the processor level, I expect them to be much faster than table lookups.
Edit:
Wikipedia suggests that FSINCOS was added with the 387 processors, so you can hardly find a processor which doesn't support it.
Edit:
Intel's documentation states that FSINCOS is just about 5 times slower than FDIV (i.e., floating point division).
Edit:
Please note that not all modern compilers optimize calculation of sine and cosine into a call to FSINCOS. In particular, my VS 2008 didn't do it that way.
Edit:
The first example link is dead, but there is still a version at the Wayback Machine.
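For reference, a minimal sketch of what calling FSINCOS through GCC-style inline assembly might look like (x86/x87 only; the constraint syntax below is the common idiom, not code taken from the linked examples):

// Minimal sketch: FSINCOS via GCC inline assembly (x86 only).
// FSINCOS replaces st(0) with sin(x) and pushes cos(x), so afterwards
// st(0) = cos(x) and st(1) = sin(x).
#include <cstdio>

static inline void fsincos(double x, double *s, double *c) {
    __asm__ ("fsincos" : "=t" (*c), "=u" (*s) : "0" (x));
}

int main() {
    double s, c;
    fsincos(0.5, &s, &c);
    std::printf("sin = %f, cos = %f\n", s, c);
    return 0;
}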
If you are willing to use a commercial product, and are calculating a number of sin/cos calculations at the same time (so you can use vectorized functions), you should check out Intel's Math Kernel Library.
It has a sincos function (the original documentation link is dead).
According to that documentation, it averages 13.08 clocks/element on a Core 2 Duo in high accuracy mode, which I think will be even faster than FSINCOS.
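If I remember the VML interface correctly, usage looks roughly like the following; treat the exact header and the vdSinCos signature as assumptions to be checked against the current MKL documentation:

// Rough sketch of MKL's vectorized sincos (mkl.h and the vdSinCos
// signature are recalled from memory, so verify against the MKL docs).
#include <mkl.h>

void mkl_sincos_example() {
    const MKL_INT n = 1024;
    double x[1024], s[1024], c[1024];
    for (MKL_INT i = 0; i < n; ++i)
        x[i] = 0.001 * i;
    vdSinCos(n, x, s, c);   // fills s[i] = sin(x[i]), c[i] = cos(x[i])
}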
Technically, you’d achieve this by using complex numbers and Euler’s Formula. Thus, something like (C++)
#include <complex>
using namespace std;

complex<double> res = exp(complex<double>(0, x));
// or, equivalently: complex<double> res = polar<double>(1, x);
double sin_x = res.imag();
double cos_x = res.real();
should give you sine and cosine in one step. How this is done internally is a question of the compiler and library being used. It could well take longer to do it this way (just because Euler’s Formula is mostly used to compute the complex exp using sin and cos, and not the other way round), but there might be some theoretical optimisation possible.
Edit
The headers in <complex> for GNU C++ 4.2 use explicit calculations of sin and cos inside polar, so it doesn’t look too good for optimisations there unless the compiler does some magic (see the -ffast-math and -mfpmath switches mentioned in Chi’s answer).
When you need performance, you could use a precalculated sin/cos table (one table will do, stored as a Dictionary). Well, it depends on the accuracy you need (maybe the table would be too big), but it should be really fast.
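As an illustration of the idea (the table size, float precision, and the truncating, non-interpolating lookup are arbitrary choices here), one shared table can serve both functions because cos(x) = sin(x + π/2):

// Sketch of a shared sin/cos lookup table; no interpolation, so accuracy
// is limited by the table resolution N.
#include <cmath>
#include <vector>

struct SinCosTable {
    static const int N = 4096;                    // entries per full turn
    std::vector<float> table;
    SinCosTable() : table(N) {
        const double TWO_PI = 6.283185307179586;
        for (int i = 0; i < N; ++i)
            table[i] = static_cast<float>(std::sin(TWO_PI * i / N));
    }
    float sin(float x) const { return table[index(x)]; }
    // cos(x) = sin(x + pi/2), i.e. a quarter-turn offset into the same table.
    float cos(float x) const { return table[(index(x) + N / 4) % N]; }

private:
    int index(float x) const {
        const double TWO_PI = 6.283185307179586;
        int i = static_cast<int>(x / TWO_PI * N) % N;
        return i < 0 ? i + N : i;
    }
};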
You could compute either one and then use the identity:
cos(x)^2 = 1 - sin(x)^2
(taking a square root and picking the sign from the quadrant of x), but as @tanascius says, a precomputed table is the way to go.
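A sketch of that approach, assuming x has already been reduced to [-pi, pi] so the sign of the cosine can be read off the quadrant:

// Recover cos from sin via cos(x)^2 = 1 - sin(x)^2; assumes x in [-pi, pi].
#include <cmath>

void sin_then_cos(double x, double &s, double &c) {
    s = std::sin(x);
    c = std::sqrt(1.0 - s * s);
    if (std::fabs(x) > 1.5707963267948966)   // |x| > pi/2  =>  cos(x) < 0
        c = -c;
}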
If you use the GNU C library, then you can do:
#define _GNU_SOURCE
#include <math.h>
and you will get declarations of the sincos(), sincosf() and sincosl() functions that calculate both values together - presumably in the fastest way for your target architecture.
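A small usage sketch (sincos() is a GNU extension, so this relies on glibc; other C libraries may not provide it):

// sincos() fills both results through output pointers in a single call.
#define _GNU_SOURCE
#include <math.h>
#include <stdio.h>

int main(void) {
    double s, c;
    sincos(0.75, &s, &c);
    printf("sin = %f, cos = %f\n", s, c);
    return 0;
}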
There is some very interesting stuff on this forum page, which is focused on finding good approximations that are fast: http://www.devmaster.net/forums/showthread.php?t=5784
Disclaimer: I have not used any of this stuff myself.
Update 22 Feb 2018: Wayback Machine is the only way to visit the original page now: https://web.archive.org/web/20130927121234/http://devmaster.net/posts/9648/fast-and-accurate-sine-cosine
Many C math libraries, as caf indicates, already have sincos(). The notable exception is MSVC.
And regarding look-up tables, Eric S. Raymond in The Art of Unix Programming (2004) (Chapter 12) says explicitly that this is a Bad Idea (at the present moment in time):
"Another example is precomputing small tables--for example, a table of sin(x) by degree for optimizing rotations in a 3D graphics engine will take 365 × 4 bytes on a modern machine. Before processors got enough faster than memory to demand caching, this was an obvious speed optimization. Nowadays it may be faster to recompute each time rather than pay for the percentage of additional cache misses caused by the table.
"But in the future, this might turn around again as caches grow larger. More generally, many optimizations are temporary and can easily turn into pessimizations as cost ratios change. The only way to know is to measure and see." (from the Art of Unix Programming)
But, judging from the discussion above, not everyone agrees.
I don't believe that lookup tables are necessarily a good idea for this problem. Unless your accuracy requirements are very low, the table needs to be very large, and modern CPUs can do a lot of computation while a value is fetched from main memory. This is not one of those questions which can be properly answered by argument (not even mine); test, measure, and consider the data.
But I'd look to the fast implementations of SinCos that you find in libraries such as AMD's ACML and Intel's MKL.
This article shows how to construct a parabolic algorithm that generates both the sine and the cosine:
DSP Trick: Simultaneous Parabolic Approximation of Sin and Cos
http://www.dspguru.com/dsp/tricks/parabolic-approximation-of-sin-and-cos
When performance is critical for this kind of thing it is not unusual to introduce a lookup table.
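The gist of the parabolic trick from the linked article, sketched below with the usual low-precision coefficients (these are the standard textbook values, not necessarily the exact ones from the article): approximate sin on [-pi, pi] with a parabola and get cos by a quarter-turn phase shift.

// Parabolic approximation of sin on [-pi, pi]; cos via a pi/2 phase shift.
// Low precision (maximum error of a few percent) but very cheap.
#include <cmath>

const float PI_F = 3.14159265358979f;

inline float parabolic_sin(float x) {          // expects x in [-pi, pi]
    const float B = 4.0f / PI_F;
    const float C = -4.0f / (PI_F * PI_F);
    return B * x + C * x * std::fabs(x);
}

inline void parabolic_sincos(float x, float &s, float &c) {
    s = parabolic_sin(x);
    float xc = x + 0.5f * PI_F;                // shift for cosine
    if (xc > PI_F) xc -= 2.0f * PI_F;          // wrap back into [-pi, pi]
    c = parabolic_sin(xc);
}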
For a creative approach, how about expanding the Taylor series? Since they have similar terms, you could do something like the following pseudocode (a runnable C++ version follows below):
numerator = x
denominator = 1
sine = x
cosine = 1
op = -1
fact = 1
while (not enough precision) {
    fact++
    denominator *= fact
    numerator *= x
    cosine += op * numerator / denominator
    fact++
    denominator *= fact
    numerator *= x
    sine += op * numerator / denominator
    op *= -1
}
This means you do something like this: starting with x for sine and 1 for cosine, follow the pattern: subtract x^2 / 2! from cosine, subtract x^3 / 3! from sine, add x^4 / 4! to cosine, add x^5 / 5! to sine...
I have no idea whether this would be performant. If you need less precision than the built-in sin() and cos() give you, it may be an option.
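A minimal runnable version of the pseudocode above might look like this in C++ (the fixed term count replaces the "not enough precision" test, and convergence is only fast if x has first been reduced to something like [-pi, pi]):

// Runnable C++ version of the pseudocode above; the term count is an
// arbitrary choice standing in for the original precision test.
void taylor_sincos(double x, double &sine, double &cosine, int terms = 10) {
    double numerator = x;
    double denominator = 1.0;
    sine = x;
    cosine = 1.0;
    double op = -1.0;
    double fact = 1.0;
    for (int i = 0; i < terms; ++i) {
        fact += 1.0;
        denominator *= fact;
        numerator *= x;
        cosine += op * numerator / denominator;
        fact += 1.0;
        denominator *= fact;
        numerator *= x;
        sine += op * numerator / denominator;
        op *= -1.0;
    }
}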
There is a nice solution in the CEPHES library which can be pretty fast, and you can trade accuracy for CPU time quite flexibly.
Remember that cos(x) and sin(x) are the real and imaginary parts of exp(ix). So we want to calculate exp(ix) to get both. We precalculate exp(iy) for some discrete values of y between 0 and 2π. We shift x into the interval [0, 2π). Then we select the y that is closest to x and write
exp(ix) = exp(iy + i(x - y)) = exp(iy) · exp(i(x - y)).
We get exp(iy) from the lookup table. And since |x - y| is small (at most half the distance between the y values), the Taylor series will converge nicely in just a few terms, so we use that for exp(i(x - y)). Then we just need a complex multiplication to get exp(ix).
Another nice property of this is that you can vectorize it using SSE.
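A scalar (non-SSE) sketch of that scheme, with the table size and the number of Taylor terms picked arbitrarily:

// Sketch: exp(i*x) = table[k] * exp(i*d), where table[k] = exp(i*y) for the
// grid point y closest to x and d = x - y is small, so a short Taylor series
// for exp(i*d) suffices. Real part = cos(x), imaginary part = sin(x).
#include <cmath>
#include <complex>
#include <vector>

struct ExpITable {
    static const int N = 256;                   // grid points over [0, 2*pi)
    std::vector<std::complex<double>> table;
    ExpITable() : table(N) {
        const double TWO_PI = 6.283185307179586;
        for (int i = 0; i < N; ++i) {
            double y = TWO_PI * i / N;
            table[i] = std::complex<double>(std::cos(y), std::sin(y));
        }
    }
    std::complex<double> expi(double x) const {
        const double TWO_PI = 6.283185307179586;
        x -= TWO_PI * std::floor(x / TWO_PI);               // shift x into [0, 2*pi)
        int kraw = static_cast<int>(x / TWO_PI * N + 0.5);  // nearest grid point
        double y = TWO_PI * kraw / N;
        double d = x - y;                                   // |d| <= pi/N
        int k = kraw % N;                                   // exp(i*2*pi) == 1, so wrapping is fine
        // Few-term Taylor series for cos(d) and sin(d).
        double d2 = d * d;
        double cd = 1.0 - d2 / 2.0 + d2 * d2 / 24.0;
        double sd = d - d2 * d / 6.0 + d2 * d2 * d / 120.0;
        return table[k] * std::complex<double>(cd, sd);
    }
};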
You may want to have a look at http://gruntthepeon.free.fr/ssemath/, which offers an SSE vectorized implementation inspired by the CEPHES library. It has good accuracy (maximum deviation from sin/cos on the order of 5e-8) and good speed (it slightly outperforms fsincos on a single-call basis, and is a clear winner over multiple values).
I have posted a solution involving inline ARM assembly capable of computing both the sine and cosine of two angles at a time here: Fast sine/cosine for ARMv7+NEON
An accurate yet fast approximation of the sin and cos functions simultaneously, in JavaScript, can be found here: http://danisraelmalta.github.io/Fmath/ (easily ported to C/C++).