Many algorithms require computing (-1)^n (both integers), usually as a factor in a series; that is, a factor that is -1 for odd n and 1 for even n. In a C++ environment, one often sees:
#include <iostream>
#include <cmath>

int main() {
    int n = 13;
    std::cout << std::pow(-1, n) << std::endl;
}
What is better, or what is the usual convention? (Or is it something else entirely?)

std::pow(-1, n)
std::pow(-1, n%2)
(n%2 ? -1 : 1)
(1 - 2*(n%2))   // gives an incorrect value for negative n
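For reference, here is a minimal, self-contained sketch (my own, not taken from any answer) that puts the listed candidates side by side; the helper names via_pow, via_ternary and via_arithmetic are made up purely for illustration:

#include <cmath>
#include <iostream>

// Hypothetical helper names, just to label the candidate expressions.
int via_pow(int n)        { return static_cast<int>(std::pow(-1, n)); } // goes through double
int via_ternary(int n)    { return n % 2 ? -1 : 1; }                    // also correct for negative n
int via_arithmetic(int n) { return 1 - 2 * (n % 2); }                   // wrong for negative odd n

int main() {
    const int samples[] = {-3, -2, 0, 1, 2, 13};
    for (int n : samples)
        std::cout << n << ": "
                  << via_pow(n) << ' '
                  << via_ternary(n) << ' '
                  << via_arithmetic(n) << '\n';
}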
In addition, user @SeverinPappadeux proposed another alternative based on a (global?) array lookup. My version of it is:
const int res[] {-1, 1, -1}; // three elements are needed for negative modulo results
const int* const m1pow = res + 1;
...
m1pow[n%2]
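A self-contained version of that idea might look like the sketch below; it assumes that n % 2 can only be -1, 0 or 1, which holds in C++11 and later where integer division truncates toward zero:

#include <iostream>

int main() {
    // Three entries so that n % 2 == -1 (negative odd n) also maps to -1.
    static const int res[] {-1, 1, -1};
    const int* const m1pow = res + 1;   // m1pow[-1], m1pow[0], m1pow[1] are all valid

    const int samples[] = {-3, -2, 0, 1, 13};
    for (int n : samples)
        std::cout << n << " -> " << m1pow[n % 2] << '\n';
}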
This is probably not going to settle the question, but by looking at the emitted code we can discard some options.
1 - ((n & 1) << 1);
(7 operations, no memory access)

mov eax, DWORD PTR [rbp-20]
add eax, eax
and eax, 2
mov edx, 1
sub edx, eax
mov eax, edx
mov DWORD PTR [rbp-16], eax
and
retvals[n&1];
(5 operations, memory (or register?) access)

mov eax, DWORD PTR [rbp-20]
and eax, 1
cdqe
mov eax, DWORD PTR main::retvals[0+rax*4]
mov DWORD PTR [rbp-8], eax
With optimization enabled, the same expressions compile to:

1 - ((n & 1) << 1);

(4 operations, no memory access)

add edx, edx
mov ebp, 1
and edx, 2
sub ebp, edx
and
retvals[n&1];
(4 operations, memory (or register?) access)

mov eax, edx
and eax, 1
movsx rcx, eax
mov r12d, DWORD PTR main::retvals[0+rcx*4]
and
n%2?-1:1;
(4 operations, no memory access)
cmp eax, 1
sbb ebx, ebx
and ebx, 2
sub ebx, 1
The tests are here. I had to do some acrobatics to get meaningful code in which the compiler doesn't elide the operations altogether.
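I don't reproduce the full test code here, but the kind of acrobatics I mean is roughly the following sketch, which uses a volatile sink so the compiler cannot throw the computation away (the actual linked tests may differ in detail):

#include <cstdio>

int main() {
    volatile int sink = 0;                // stores to a volatile cannot be elided
    for (int n = -1000000; n < 1000000; ++n)
        sink = 1 - ((n & 1) << 1);        // candidate expression under test
    std::printf("%d\n", sink);            // read the result so the loop has an observable effect
}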
So in the end it depends on the level of optimization and on expressiveness:

1 - ((n & 1) << 1); is always good, but not very expressive.
retvals[n&1]; pays a price for the memory access.
n%2?-1:1; is expressive and good, but only with optimization.
You can use (n & 1) instead of n % 2, and << 1 instead of * 2, if you want to be super-pedantic, er, I mean optimized. So the fastest way to compute it on an 8086 processor is:

1 - ((n & 1) << 1)
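As a side note (my addition, not part of the original answer): for negative n this variant and 1 - 2*(n % 2) behave differently, because since C++11 n % 2 is -1 for negative odd n, while n & 1 is 1 on the usual two's-complement representation. A quick check:

#include <iostream>

int main() {
    const int samples[] = {-3, -2, 2, 3};
    for (int n : samples)
        std::cout << n
                  << ": 1 - 2*(n % 2) = "      << (1 - 2 * (n % 2))       // -3 gives 3
                  << ", 1 - ((n & 1) << 1) = " << (1 - ((n & 1) << 1))    // -3 gives -1
                  << '\n';
}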
I just want to clarify where this answer is coming from. The original poster alfC did an excellent job of posting a lot of different ways to compute (-1)^n, some being faster than others.
Nowadays, with processors being as fast as they are and optimizing compilers being as good as they are, we usually value readability over the slight (even negligible) improvement from shaving a few CPU cycles off an operation.
There was a time when one-pass compilers ruled the earth and MUL operations were new and decadent; in those days a power-of-2 operation was an invitation for gratuitous optimization.