I read through several very good answers about undefined behaviour and sequence points (e.g. Undefined behavior and sequence points), and I understand that
int i = 1;
a = i + i++; //this is undefined behaviour
is undefined behaviour, according to the C++ standard. But what is the deeper reasoning behind making it undefined behaviour? Wouldn't it be enough to make it unspecified behaviour? The usual argument is that by having few sequence points, C++ compilers can optimize better for different architectures, but wouldn't leaving it unspecified allow those optimizations as well? In
a = foo(bar(1), bar(2)); // unspecified behaviour: the evaluation order of the arguments is not fixed
the compiler can also optimize, and it is not undefined behaviour. In the first example it seems clear that a is either 2 or 3, so the semantics seem clear to me. I hope there is a reasoning for why some things are unspecified and others are undefined.
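Concretely, the two sequenced interpretations I have in mind would look roughly like this (my own sketch, not code from the linked answers): either the increment takes effect after both operands are read, or before the left operand is read.

#include <iostream>

int main() {
    // Interpretation A: both operands are read before the increment takes effect.
    {
        int i = 1;
        int left  = i;     // reads 1
        int right = i++;   // yields 1, then i becomes 2
        std::cout << left + right << '\n';  // prints 2
    }
    // Interpretation B: the increment takes effect before the left operand is read.
    {
        int i = 1;
        int right = i;     // the value i++ would yield: 1
        ++i;               // i becomes 2
        int left  = i;     // reads 2
        std::cout << left + right << '\n';  // prints 3
    }
}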
Undefined behavior results in unpredictable behavior of the entire program. With unspecified behavior, the implementation makes one of a limited set of valid choices at a particular point, and the program then continues executing normally, just as if that choice had been written explicitly.
Undefined behavior exists mainly to give the compiler freedom to optimize. One thing it allows the compiler to do, for example, is to operate under the assumption that certain things can't happen (without having to first prove that they can't happen, which would often be very difficult or impossible).
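As a small sketch of that idea (my own example, not part of the original answer): signed integer overflow is undefined behavior, so a compiler may assume it never happens and simplify code based on that assumption.

// Because signed overflow is undefined behavior, a compiler may assume that
// x + 1 never wraps around, so the comparison can be treated as always true.
// Optimizing compilers commonly reduce this function to "return true;" and
// drop the arithmetic entirely.
bool plus_one_is_greater(int x) {
    return x + 1 > x;
}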
Undefined behavior (often abbreviated UB) is the result of executing code whose behavior is not well defined by the C++ language. In this case, the C++ language has no rule determining what happens when a scalar variable is both modified and read without those two operations being sequenced relative to one another, as in i + i++.
A program can be said to contain unspecified behavior when its source code may produce an executable that exhibits different behavior when compiled on a different compiler, or on the same compiler with different settings, or indeed in different parts of the same executable.
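For example (a sketch built around the question's foo and bar, with hypothetical definitions that print their arguments): the program below is well-formed and always computes the same sum, but whether it prints bar(1) or bar(2) first is left to the implementation.

#include <iostream>

// Hypothetical stand-ins for the foo and bar used in the question.
int bar(int n) {
    std::cout << "bar(" << n << ")\n";  // observable side effect
    return n;
}

int foo(int x, int y) { return x + y; }

int main() {
    // The evaluation order of the two arguments is unspecified: "bar(1)"
    // may be printed before or after "bar(2)", but either way the program
    // is well-defined and a is 3.
    int a = foo(bar(1), bar(2));
    std::cout << "a = " << a << '\n';
}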
Not all of those optimizations. Itanium, for example, could perform both the add and increment in parallel, and it might cough up, say, a hardware exception for trying to do something like this.
But this is purely a micro-optimization, writing a compiler to take advantage of it was extremely difficult, and architectures that can do it are extremely rare (none existed at the time, IIRC; it was mostly hypothetical). So the reality is that, as of 2012, there is no reason for it not to be well-defined behaviour, and indeed C++11 made more of these situations well-defined.
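To illustrate that last sentence (these particular expressions are my own examples, not the answerer's):

int i = 1;
i = ++i + 1;      // undefined behaviour under C++03's sequence-point rules,
                  // but well-defined since C++11: i becomes 3
int a = i + i++;  // still undefined behaviour even in C++17: the two operands
                  // of the built-in + remain unsequenced with respect to each other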