Take a C++ integral variable i, and suppose that you're multiplying its value by 2.
If i is signed, I believe that the operation is somewhat equivalent, at least mathematically, to:
i = i << 1;
But if i's type is unsigned, then since unsigned values do not overflow but instead wrap modulo their range, presumably the operation is something like this:
i = (i << 1) & (decltype(i))-1;
Now, I figure that the actual machine instructions will probably be more concise than a sequence of shifts for multiplication. But would a modern, say x86, CPU have a specific instruction for unsigned/modulo math? Or will performing math with unsigned values tend to cost an additional instruction, when compared to math with signed values?
(Yes, it would be ridiculous to care about this whilst programming; I'm interested out of pure curiosity.)
unsigned leads to the same or better performance than signed. Some examples:

- Division by a constant which is a power of 2 (see also the answer from FredOverflow)
- Division by a constant number (for example, my compiler implements division by 13 using 2 asm instructions for unsigned, and 6 instructions for signed)
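As a rough sketch of what this looks like in source form (the function names are mine, and the exact instruction counts depend on the compiler and target), you can paste these into a compiler explorer and compare the generated assembly:

// Division by a power of 2: the unsigned version is typically a single
// logical shift, while the signed version needs an extra fix-up because
// C++ division truncates toward zero but an arithmetic shift rounds
// toward negative infinity.
unsigned udiv8 (unsigned x) { return x / 8; }
int sdiv8 (int x) { return x / 8; }

// Division by another constant: both usually become a "multiply by a
// magic constant, then shift" sequence, but the signed version needs
// additional sign corrections.
unsigned udiv13 (unsigned x) { return x / 13; }
int sdiv13 (int x) { return x / 13; }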
Variables such as integers can be represented in two ways, i.e., signed and unsigned. Signed numbers use a sign bit, so they can distinguish between negative and positive values, whereas unsigned numbers store only non-negative values.
If a variable is signed, the compiler uses signed instructions to manipulate it (e.g. IDIV), and when it is unsigned, it uses another instruction (e.g. DIV). So the compiler produces a program that tells the CPU how to interpret the data.
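For a divisor that is not known at compile time, the difference is easy to see (a minimal sketch with hypothetical function names; the exact sequence varies by compiler):

unsigned udiv (unsigned a, unsigned b) { return a / b; } // x86: zero EDX, then DIV
int sdiv (int a, int b) { return a / b; } // x86: CDQ to sign-extend, then IDIV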
Unsigned integers are used when we know that the value that we are storing will always be non-negative (zero or positive). Note: it is almost always the case that you could use a regular integer variable in place of an unsigned integer.
As the others already wrote: it does not matter to the CPU. Signed and unsigned instructions take the same time, and some operations in unsigned arithmetic are even easier to do and may require a cycle less than the signed variant (multi-precision division is one example).
However, this is only half of the story.
C++ defines signed integer overflow as undefined behavior, while unsigned arithmetic wraps around modulo 2^N (where N is the number of bits in the type). This offers completely different optimization opportunities, which lead to different code.
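To see the wrap-around rule in isolation, here is a minimal sketch (the signed case is left commented out precisely because it is undefined behavior):

#include <climits>
#include <cstdio>

int main()
{
    unsigned u = UINT_MAX;
    u = u + 1u;              // well-defined: wraps around to 0
    std::printf("%u\n", u);  // prints 0

    // int s = INT_MAX;
    // s = s + 1;            // undefined behavior: the compiler is allowed
    //                       // to assume this never happens
}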
One example of how this affects optimization:
int foo (int a)
{
return a * 1000 / 500;
}
unsigned bar (unsigned a)
{
return a * 1000 / 500;
}
Here foo can be optimized to:
int foo (int a)
{
return a * 2;
}
And bar will stay the way it is.
Note that mathematically these two functions are the same, but they start to give different results if the argument exceeds INT_MAX/1000.
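You can check the unsigned case directly; this is a sketch assuming a 32-bit unsigned int:

#include <cstdio>

unsigned bar (unsigned a)
{
    return a * 1000 / 500;
}

int main()
{
    // 5'000'000 * 1000 = 5'000'000'000, which wraps modulo 2^32 to
    // 705'032'704; dividing that by 500 gives 1'410'065.
    std::printf("%u\n", bar(5'000'000u)); // prints 1410065, not 10000000
}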
Since the effect of signed overflow is undefined, the compiler has the option to simply pretend there is no INT_MAX when it comes to simplifying expressions. For unsigned arithmetic this is not the case, and the compiler has to emit code that actually does the multiplication and division. This is of course slower than the optimized variant.
Note: most compilers are conservative when it comes to such optimizations and only enable them if you ask for them, because they tend to break code and overflow checks. Other compilers, especially in the embedded and DSP world, always do these kinds of optimizations, even at low optimization levels. The programmers who write for these kinds of machines are aware of the subtle details, so this is rarely a problem.
On the other hand, we've discussed stories on Stack Overflow more than once where C/C++ programmers fell into this trap.