 

What does the compiler do when casting integer constants?

Consider the following macro:

#define MIN_SWORD (signed int) 0x8000

When used in an expression such as:

signed long s32;
if (s32 < (signed long)MIN_SWORD)...

the expression is expected to perform the following check:

if (s32 < -32768)

On some compilers this seems to work fine. But on other compilers the expression is evaluated as:

if (s32 < 32768)

My question: how is an ANSI C compiler supposed to evaluate the expression (signed long)(signed int)0x8000?

It seems that on some compilers the cast to (signed int) does not cause the expected conversion of the positive constant 0x8000 to the minimum negative value of a signed int when the result is afterwards cast to the wider type signed long. In other words, the evaluated constant is 32768L rather than -32768L.

Is this behavior perhaps undefined in ANSI C?
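
For reference, here is a minimal test program one could use to compare compilers (the value -40000L is just an arbitrary test value below -32768, not part of the original code):

#include <stdio.h>

#define MIN_SWORD (signed int) 0x8000

int main(void)
{
    signed long s32 = -40000L;   /* arbitrary value below -32768 */

    /* Print the constant to see how this compiler evaluates it. */
    printf("(signed long) MIN_SWORD = %ld\n", (signed long) MIN_SWORD);

    if (s32 < (signed long) MIN_SWORD)
        printf("s32 < MIN_SWORD\n");
    else
        printf("s32 >= MIN_SWORD\n");

    return 0;
}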

asked Feb 22 '26 by Oliver


1 Answer

If an int is 16-bit on your platform, then the type of 0x8000 is unsigned int (see 6.4.4 p.5 of the standard). Converting to a signed int is implementation-defined if the value cannot be represented (see 6.3.1.3 p.3). So the behaviour of your code is implementation-defined.
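
If the goal is a constant that evaluates to -32768 everywhere (my assumption about the intent of MIN_SWORD), one way to sidestep the unsigned constant altogether is to build the value from in-range constants:

#include <limits.h>

/* Both -32767 and 1 fit in an int on any conforming implementation
   (int is at least 16 bits wide), and on a two's-complement machine
   the result -32768 is representable whether int is 16 or 32 bits. */
#define MIN_SWORD (-32767 - 1)

/* Or, if the "signed word" is meant to be the platform's int itself: */
/* #define MIN_SWORD INT_MIN */

Either form avoids the out-of-range conversion entirely, so no implementation-defined behaviour is involved.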

Having said that, in practice, I would've assumed that this should always do what you "expect". What compiler is this?

answered Feb 23 '26 by Oliver Charlesworth