
C Convert a Short Int to Unsigned Short

A short int in C contains 16 bits, and the first (most significant) bit indicates whether the value is negative or positive. I have the following C program:

#include <stdio.h>

int main() {
    short int v;
    unsigned short int uv;
    v = -20000;
    uv = v;
    printf("\nuv = %hu\n", uv);
    return 0;
}

Since the value of v is negative, I know the first bit of the variable is a 1. So I expect the output of the program to be uv = 52,768, because 20,000 + 2^15 = 52,768.

Instead I am getting uv = 45536 as the output. What part of my logic is incorrect?


1 Answer

The behavior you're seeing can be explained by the conversion rules of C:

6.3.1.3 Signed and unsigned integers

1 When a value with integer type is converted to another integer type other than _Bool, if the value can be represented by the new type, it is unchanged.

2 Otherwise, if the new type is unsigned, the value is converted by repeatedly adding or subtracting one more than the maximum value that can be represented in the new type until the value is in the range of the new type.

(This quote is from C99.)

-20000 can't be represented by an unsigned short because it's negative. The target type is unsigned, so the value is converted by repeatedly adding 65536 (which is USHRT_MAX + 1 when unsigned short is 16 bits) until it is in range: -20000 + 65536 is exactly 45536.
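
Here is a minimal sketch (not part of the original answer) that applies the standard's rule by hand and compares it against the actual conversion, assuming a 16-bit unsigned short so that USHRT_MAX + 1 is 65536:

#include <limits.h>
#include <stdio.h>

int main() {
    short int v = -20000;
    unsigned short int uv = v;          /* converted as described in 6.3.1.3 */

    /* Apply the rule by hand: keep adding USHRT_MAX + 1 until the
       value is in the range of the new type. */
    long manual = v;
    while (manual < 0)
        manual += (long)USHRT_MAX + 1;  /* 65536 for a 16-bit unsigned short */

    printf("uv = %hu, manual = %ld\n", uv, manual);  /* both print 45536 */
    return 0;
}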

Note that this behavior is mandated by the C standard and has nothing to do with how negative numbers are actually represented in memory (in particular, it works the same way even on machines using sign/magnitude or ones' complement).
