I was learning about undefined behaviour and stumbled upon this code without any clear explanation:
#include <stdio.h>
#include <limits.h>

int foo(int x) {
    printf("%d\n", x);     // 2147483647
    printf("%d\n", x + 1); // -2147483648, overflow
    return (x + 1) > x;    // 1, but how????
}

int main(void) {
    printf("%d\n", INT_MAX);                 // 2147483647
    printf("%d\n", INT_MAX + 1);             // -2147483648, overflow
    printf("%d\n", (INT_MAX + 1) > INT_MAX); // 0, makes sense: negative < positive
    printf("%d\n", foo(INT_MAX));            // 1
    return 0;
}
When compiling on gcc, the compiler issues a warning:
warning: integer overflow in expression of type 'int' results in '-2147483648'
So, clearly the value of INT_MAX+1 is negative, which explains why (INT_MAX+1) > INT_MAX evaluates to 0.

But why (or how) does (x+1) > x evaluate to 1 for x = INT_MAX in foo(...)?
When a program exhibits undefined behavior, the C standard places no requirements on what the program will do. The program may crash, it may output strange results, or it may appear to work properly.
In fact, compilers will often work under the assumption that a program does not contain undefined behavior.
In the case of this expression:

(x+1) > x

Given that x has type int, the compiler knows that signed overflow is undefined behavior and works under the assumption that it will not occur. With that in mind, there is no value of x for which this expression could be false, so the compiler can optimize away the comparison and replace it with the constant 1.
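If the goal is a comparison whose result is actually defined, the overflow has to be ruled out before it can happen. A minimal sketch (increment_exceeds is an illustrative name, not from the question):

```c
#include <limits.h>

/* Illustrative helper: answers "is x + 1 greater than x?" without ever
   computing x + 1 when x == INT_MAX, so no signed overflow can occur. */
static int increment_exceeds(int x)
{
    if (x == INT_MAX)
        return 0;        /* x + 1 would overflow; don't evaluate it */
    return (x + 1) > x;  /* no overflow possible here; always 1 */
}
```

With the overflow case handled explicitly, the result agrees with the unoptimized behavior (0 for INT_MAX) at every optimization level.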
When I run this program under gcc 4.8.5, I get the following results with -O0 and -O1:
2147483647
-2147483648
0
2147483647
-2147483648
0
And the following with -O2 and -O3:
2147483647
-2147483648
0
2147483647
-2147483648
1
Then looking at the assembly for foo in the latter case:
foo:
.LFB11:
.file 1 "x1.c"
.loc 1 4 0
.cfi_startproc
.LVL0:
pushq %rbx // first call to printf
.cfi_def_cfa_offset 16
.cfi_offset 3, -16
.loc 1 5 0
movl %edi, %esi
.loc 1 4 0
movl %edi, %ebx
.loc 1 5 0
xorl %eax, %eax
movl $.LC0, %edi
.LVL1:
call printf
.LVL2:
.loc 1 6 0 // second call to printf
leal 1(%rbx), %esi
movl $.LC0, %edi
xorl %eax, %eax
call printf
.LVL3:
.loc 1 8 0 // return value
movl $1, %eax
popq %rbx
.cfi_def_cfa_offset 8
.LVL4:
ret
.cfi_endproc
We can see that's exactly what the compiler did: it optimized away the comparison and always returns 1.
This illustrates how compilers can make use of undefined behavior to apply various optimizations.
When the Standard was written, compilers for conventional architectures would often perform integer arithmetic in wraparound two's-complement fashion, but there were times when doing something else might be more useful. As a couple of examples:
If a program was known not to deliberately cause integer overflows, having an implementation trap on overflow would be less bad than having it produce output that was superficially valid but wrong.
Even on commonplace platforms, it was sometimes advantageous to perform arithmetic as though using a wider-than-specified type. For example, on the 8086, the multiply instruction took two 16-bit operands and produced a 32-bit result, so when performing a computation like int32a = int16a*int16b + int32b;, keeping the 32-bit result of the multiplication would be cheaper than using a sign-extension instruction to promote the bottom 16 bits of the result to 32 bits. Additionally, that abstraction model would allow many kinds of expressions to be simplified, such as replacing (x*30/15) with (x*2), or (as shown in the example) x+y > x with y > 0.
Rather than trying to guess at all the ways it might be useful for an implementation to handle integer overflow, or risk preventing implementations from treating integer overflow in whatever fashion their customers would find most useful, the Standard lets implementations choose whatever method they think most useful. The authors of gcc have decided that the most useful way to process integer overflow is to use it to draw extended inferences that aren't bound by normal laws of causality.
Consider, for example:
unsigned arr[32771];

unsigned mul_mod_32768(unsigned short x, unsigned short y)
{
    /* Note that the authors of the Standard specified that the multiply
       here happens as signed, because--according to the Rationale--they
       expected that commonplace implementations would process signed and
       unsigned math identically in cases like this! */
    return (x * y) & 0x7FFFu;
}

void test(unsigned short n)
{
    unsigned total = 0;
    for (unsigned short i = 32768; i < n; i++)
    {
        total += mul_mod_32768(i, 65535);
    }
    if (n < 32770)
        arr[n] = total;
}
At optimization level 2 or 3, gcc will generate code for test() that is precisely equivalent to:

void test(unsigned short n)
{
    arr[n] = 0;
}
If n is 32768 or less, the loop won't run at all, total will be zero, and total will be stored into arr[n]. If n is 32769, the loop will run once, adding 0 to total, which will then be stored into arr[n]. If n is 32770 or greater, the Standard won't impose any requirements, so gcc will process those cases the same way as it processed the others, blindly storing zero into arr[n].
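A variant of mul_mod_32768 that sidesteps the signed-promotion trap casts one operand to unsigned, so the multiply wraps with defined behavior (a sketch; the _safe name is mine):

```c
/* x and y promote to int, so 32768 * 65535 would overflow as a signed
   multiply. Casting one operand to unsigned makes the multiplication
   unsigned, which wraps modulo 2^32 with defined behavior (assuming
   16-bit short and 32-bit int, as on typical platforms). */
static unsigned mul_mod_32768_safe(unsigned short x, unsigned short y)
{
    return ((unsigned)x * y) & 0x7FFFu;
}
```

With this version, the i == 32768 iteration is well defined (it really does add 0 to total), and the compiler can no longer infer that n < 32770.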
The Standard deliberately makes no attempt to forbid implementations which are specialized for particular narrow purposes from behaving in ways that would make them unsuitable for many others. The behavior of gcc here may be suitable for use with programs that will process data exclusively from trustworthy sources, but that doesn't imply that it should be viewed as suitable for anything else. Unfortunately, the language clang and gcc seek to process is very different from the language the C Standards Committee was chartered to describe.