
Can branches with undefined behavior be assumed unreachable and optimized as dead code?

Consider the following statement:

*((char*)NULL) = 0; //undefined behavior 

It clearly invokes undefined behavior. Does the existence of such a statement in a given program mean that the whole program is undefined or that behavior only becomes undefined once control flow hits this statement?

Would the following program be well-defined in case the user never enters the number 3?

while (true) {
    int num = ReadNumberFromConsole();
    if (num == 3)
        *((char*)NULL) = 0; //undefined behavior
}

Or is it entirely undefined behavior no matter what the user enters?

Also, can the compiler assume that undefined behavior will never be executed at runtime? That would allow for reasoning backwards in time:

int num = ReadNumberFromConsole();
if (num == 3) {
    PrintToConsole(num);
    *((char*)NULL) = 0; //undefined behavior
}

Here, the compiler could reason that whenever num == 3, we will inevitably invoke undefined behavior. Therefore this case must be impossible, and the number need not be printed; the entire if statement could be optimized out. Is this kind of backwards reasoning allowed by the standard?
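For illustration, the transformation I am asking about would look something like this (a hypothetical result, not the output of any particular compiler):

int num = ReadNumberFromConsole();
// The branch is gone: under the assumption num != 3, neither the
// PrintToConsole call nor the null store needs to be emitted.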

asked Apr 18 '14 by usr


2 Answers

Does the existence of such a statement in a given program mean that the whole program is undefined or that behavior only becomes undefined once control flow hits this statement?

Neither. The first condition is too strong and the second is too weak.

Object accesses are sometimes sequenced, but the standard describes the behavior of the program outside of time. Danvil already quoted:

if any such execution contains an undefined operation, this International Standard places no requirement on the implementation executing that program with that input (not even with regard to operations preceding the first undefined operation)

This can be interpreted as:

If the execution of the program yields undefined behavior, then the whole program has undefined behavior.

So an unreachable statement with UB doesn't give the program UB. Nor does a reachable statement that (because of the actual values of the inputs) is never executed. That's why your first condition is too strong.
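As a minimal illustration of why the first condition would be too strong:

if (false)
    *((char*)NULL) = 0; //statically unreachable, so the program stays well-defined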

Now, the compiler cannot in general tell what has UB. So to allow the optimizer to re-order statements with potential UB that would be re-orderable should their behavior be defined, it's necessary to permit UB to "reach back in time" and go wrong prior to the preceding sequence point (or in C++11 terminology, for the UB to affect things that are sequenced before the UB thing). Therefore your second condition is too weak.

A major example of this is when the optimizer relies on strict aliasing. The whole point of the strict aliasing rules is to allow the compiler to re-order operations that could not validly be re-ordered if it were possible that the pointers in question alias the same memory. So if you use illegally aliasing pointers, and UB does occur, then it can easily affect a statement "before" the UB statement. As far as the abstract machine is concerned the UB statement has not been executed yet. As far as the actual object code is concerned, it has been partly or fully executed. But the standard doesn't try to get into detail about what it means for the optimizer to re-order statements, or what the implications of that are for UB. It just gives the implementation license to go wrong as soon as it pleases.
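A minimal sketch of the kind of reordering this licenses (the function and its name are invented for illustration):

int read_after_write(int *i, float *f) {
    *i = 1;
    *f = 2.0f;  // if f aliases i, this store is UB under strict aliasing
    return *i;  // so the compiler may assume *i is still 1 and return the
                // constant without re-reading memory
}

If a caller passes pointers that really do alias, the function can return 1 even though memory no longer holds that value; the reordering the optimizer performed is only valid because the aliasing case is UB.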

You can think of this as, "UB has a time machine".

Specifically to answer your examples:

  • Behavior is only undefined if 3 is read.
  • Compilers can and do eliminate code as dead if a basic block contains an operation certain to be undefined. They're permitted to do the same (and I'm guessing some do) in cases which aren't a single basic block but where all branches lead to UB. This example isn't a candidate unless PrintToConsole(3) is somehow known to be sure to return; it could throw an exception or whatever. A sketch of a case that does qualify follows this list.
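By contrast, here is a block that is a candidate, because nothing before the UB has an observable side effect (all names invented for illustration):

int num = ReadNumberFromConsole();
if (num == 3) {
    int doubled = num * 2;   // no observable side effect yet...
    *((char*)NULL) = 0;      // ...and the block then certainly hits UB,
    PrintToConsole(doubled); // so the whole if can be compiled away
}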

A similar example to your second is the gcc option -fdelete-null-pointer-checks, which can take code like this (I haven't checked this specific example, consider it illustrative of the general idea):

void foo(int *p) {
    if (p) *p = 3;
    std::cout << *p << '\n';
}

and change it to:

*p = 3;
std::cout << "3\n";

Why? Because if p is null then the code has UB anyway, so the compiler may assume it is not null and optimize accordingly. The Linux kernel tripped over this (https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2009-1897), essentially because it runs in a mode where dereferencing a null pointer isn't supposed to be UB; it's expected to result in a defined hardware exception that the kernel can handle. When optimization is enabled, gcc requires -fno-delete-null-pointer-checks in order to provide that beyond-standard guarantee.
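The kernel bug followed the classic dereference-before-check pattern; a simplified sketch, with types and names invented rather than taken from the actual kernel source:

struct Sock { int data; };
struct Tun  { Sock *sk; };
void use(Sock *);

void poll(Tun *tun) {
    Sock *sk = tun->sk; // dereferences tun first...
    if (!tun)           // ...so the optimizer concludes tun cannot be
        return;         //    null here and deletes the check
    use(sk);
}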

P.S. The practical answer to the question "when does undefined behavior strike?" is "10 minutes before you were planning to leave for the day".

— Steve Jessop


The standard states at 1.9/4:

[ Note: This International Standard imposes no requirements on the behavior of programs that contain undefined behavior. — end note ]

The interesting point is probably what "contain" means. A little later at 1.9/5 it states:

However, if any such execution contains an undefined operation, this International Standard places no requirement on the implementation executing that program with that input (not even with regard to operations preceding the first undefined operation)

Here it specifically mentions "execution ... with that input". I would interpret that as: undefined behaviour in a possible branch that is not executed for the given input does not affect the branch that is actually executed.

A different issue, however, is assumptions based on undefined behaviour during code generation. See Steve Jessop's answer for more details on that.

— Danvil