I'm just starting to learn C at school, and I'm trying to get a grip on the basic concepts.
Our homework has a question:
for every int x: x+1 > x
Determine whether true or false, give reasoning if true and a counterexample if false.
I'm confused because we were taught that the type int is 32 bits, which basically means the integer is stored in binary format. Is x+1 adding 1 to the decimal value of x?
x + 1 > x evaluates to 1 (true) for every int value except INT_MAX, where INT_MAX + 1 overflows; the expression x + 1 > x is therefore undefined behavior when x has the value INT_MAX.
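A minimal sketch of that point (the variable name and initial value are just for illustration; the second comparison deliberately invokes undefined behavior, so whatever it prints cannot be relied upon):

#include <limits.h>
#include <stdio.h>

int main(void)
{
    int x = 42;
    printf("%d\n", x + 1 > x);   /* prints 1: no overflow for ordinary values */

    x = INT_MAX;
    /* INT_MAX + 1 overflows a signed int, which is undefined behavior:
       the program may print 1, 0, or misbehave in some other way. */
    printf("%d\n", x + 1 > x);
    return 0;
}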
This actually means a compiler has the right to replace the expression x + 1 > x with 1: since INT_MAX + 1 is undefined behavior, the compiler is allowed to assume that, for this particular comparison, INT_MAX + 1 is greater than INT_MAX.
As the expression x + 1 > x is undefined behavior for x == INT_MAX, it is also not safe to assume x + 1 > x can ever be false (0).
Note that if x were declared as an unsigned int instead of an int, the situation is completely different. unsigned int operands never overflow (they wrap around): UINT_MAX + 1 == 0, and therefore x + 1 > x is 0 for x == UINT_MAX and 1 for all other x values.
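A small illustration of the unsigned case; the wrap-around here is well defined by the standard, so the printed results are guaranteed:

#include <limits.h>
#include <stdio.h>

int main(void)
{
    unsigned int x = UINT_MAX;
    /* Unsigned arithmetic wraps around modulo (UINT_MAX + 1),
       so UINT_MAX + 1 is exactly 0 and the behavior is well defined. */
    printf("%u\n", x + 1);        /* prints 0 */
    printf("%d\n", x + 1 > x);    /* prints 0: the only value for which it fails */

    x = 7;
    printf("%d\n", x + 1 > x);    /* prints 1 for every other value */
    return 0;
}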
Modern compilers (like gcc) usually take the opportunity to optimize this expression and replace it with 1.
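For example, a function like the sketch below is typically compiled, with optimizations enabled, into code that simply returns the constant 1 without doing any addition or comparison; the exact code generated of course depends on the compiler and flags:

int always_true(int x)
{
    /* Because signed overflow is undefined behavior, the compiler may
       assume it never happens and fold this expression to the constant 1. */
    return x + 1 > x;
}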
For the record, there were some serious security issues with known server programs using code like:
if (ptr + offset < ptr)
The code was meant to trigger a safety condition, but the compiler would optimize out the if statement (by replacing the expression with 0), and that allowed an attacker to gain privilege escalation in the server program (by opening the possibility of an exploitable buffer overflow, if I remember correctly).
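If you need such a bounds check, the usual advice is to compare the offset against the remaining space rather than relying on pointer overflow. A rough sketch, with made-up names, assuming you know where the buffer ends:

#include <stddef.h>

/* Returns 1 if buf + offset still lies within [buf, end], 0 otherwise.
   The comparison is done on integer values, so no out-of-range pointer
   is ever formed and the compiler cannot legally optimize the test away. */
int offset_in_bounds(const char *buf, const char *end, size_t offset)
{
    return offset <= (size_t)(end - buf);
}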
Note that the 32-bit signed number range is [-2147483648, 2147483647], which equals [-2^31, 2^31 - 1].
So the expression x + 1 > x is true for x in [-2147483648, 2147483646], but not for 2147483647, because adding 1 to 2147483647 in a 32-bit signed number causes an overflow; on many implementations it makes x + 1 equal to -2147483648. But the behavior is really undefined in the C standard.
So, x + 1 > x is true for x in [-2147483648, 2147483646] only. For x = 2147483647 the result of x + 1 > x is undefined: it may be true or false, depending on the compiler. If a compiler happens to compute x + 1 as -2147483648, the comparison will be false.
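As a rough illustration of that wrap on a typical two's complement machine (assuming a 32-bit int; the program below only prints the bit patterns involved and performs no overflowing addition itself):

#include <limits.h>
#include <stdio.h>

int main(void)
{
    /* 2147483647 is 0x7FFFFFFF; adding 1 to that bit pattern yields
       0x80000000, which is -2147483648 (INT_MIN) in two's complement. */
    printf("INT_MAX = %d (0x%X)\n", INT_MAX, (unsigned int)INT_MAX);
    printf("INT_MIN = %d (0x%X)\n", INT_MIN, (unsigned int)INT_MIN);
    return 0;
}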
I don't want to hand you the answer, so I'll reply with a question that should get you on the right track.
What is x + 1 when x is the largest possible value that can be stored in a 32-bit signed integer (2,147,483,647)?