 

Why is the result of the 32-bit program different from the 64-bit one?

Tags:

c

I was working on an assignment on integer byte-level representation, and I wrote a little program:

e1.c

#include <stdio.h>

int main(void) {
    printf("%d\n", -2147483648 < 2147483647);
    return 0;
}

When I compiled a 32-bit version of the executable using the C89 standard, with the command gcc e1.c -m32 -std=c89 -g -O0 -o e1, it worked as I expected: it printed 0, indicating that the compiler treated the value 2147483648 as an unsigned type and therefore converted the rest of the expression to unsigned as well. But strangely, this doesn't hold in the 64-bit version, which prints 1.

Can anyone explain that?

Asked Mar 20 '18 by Stephen.W


1 Answer

The C89 spec reads:

The type of an integer constant is the first of the corresponding list in which its value can be represented. Unsuffixed decimal: int, long int, unsigned long int; [...]

Thus, the type of the literal 2147483648 depends on the sizes of int, long, and unsigned long, in that order. Let's assume int is 32 bits, as it is on many platforms (and is likely the case on yours).

On a 32-bit platform, it's common for long to be 32 bits. Thus, the type of 2147483648 would be unsigned long.

On a 64-bit platform, it's common for long to be 64 bits (though some platforms, like 64-bit Windows with MSVC, still use 32 bits for long). Thus, the type of 2147483648 would be long.

This leads to the discrepancy you see. In one case, you're negating an unsigned long, and in the other case, you're negating a long.

On a 32-bit platform, -2147483648 evaluates to 2147483648 (using the unsigned long type). Thus the resulting comparison is 2147483648 < 2147483647, which evaluates to 0.

On a 64-bit platform, -2147483648 evaluates to -2147483648 (using the long type). Thus the resulting comparison is -2147483648 < 2147483647, which evaluates to 1.

Answered Nov 11 '22 by Cornstalks