
Is it legal for the compiler to degrade the time complexity of a program? Is this considered observable behavior?

(Note: This is intended to be a language-lawyer question; I'm not referring to any particular existing compilers.)

When, if ever, is the compiler allowed to degrade the time complexity of a program?
Under what circumstances (if any) is this considered "observable behavior", and why?
(For example, can the compiler legally "reduce" a polynomial-time program to an exponential-time one?)

If the answer differs in C and C++, or in different versions of either, then please explain the differences.

Asked Oct 07 '14 by user541686


3 Answers

The C standard doesn't actually have a time complexity model, either for its primitive operations or for its library functions, so compilers are allowed to do pretty much anything that preserves program semantics (observable behavior).
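As a minimal sketch of what that freedom means (the function names here are mine, purely illustrative): both versions below produce the same output, and the output is the only observable behavior, so a conforming compiler may compile either one as if it were the other.

    #include <cstdio>

    // O(n) as written; a compiler may legally replace it with the O(1)
    // closed form below (or, in principle, do the reverse), because only
    // the printed result is observable.
    unsigned long long sum_loop(unsigned long long n) {
        unsigned long long total = 0;
        for (unsigned long long i = 1; i <= n; ++i)
            total += i;
        return total;
    }

    // O(1); gives the same result as sum_loop whenever n * (n + 1)
    // does not overflow.
    unsigned long long sum_closed_form(unsigned long long n) {
        return n * (n + 1) / 2;
    }

    int main() {
        std::printf("%llu\n", sum_loop(100000)); // the only observable behavior
    }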

The C++ standard gives complexity guarantees only for some of its library functions, and says (17.5.1.4 [structure.specifications]):

Complexity requirements specified in the library clauses are upper bounds, and implementations that provide better complexity guarantees satisfy the requirements.

A compiler had better preserve these bounds (and since many of these functions are templated and may be inlined, the compiler is involved), but the bounds are expressed in terms of the number of elements in containers, and they restrict the number of calls to comparison operators and the like. Otherwise, the compiler is again free to do as it pleases.
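A small self-contained way to see what such a bound actually counts (the counting comparator is my illustration, not anything taken from the standard): std::sort must perform O(N log N) comparisons, and each call to the comparator below is one of the operations the bound restricts.

    #include <algorithm>
    #include <cstdio>
    #include <vector>

    int main() {
        long long comparisons = 0;
        std::vector<int> v{5, 3, 1, 4, 2};

        // Since C++11, std::sort is required to do O(N log N) comparisons;
        // the bound counts calls like this one, not wall-clock time.
        std::sort(v.begin(), v.end(), [&](int a, int b) {
            ++comparisons;
            return a < b;
        });

        std::printf("%lld comparisons for %zu elements\n",
                    comparisons, v.size());
    }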

Answered Oct 31 '22 by Fred Foo


Performance of the code is not considered observable behavior and could potentially be modified by the compiler in either direction. In practical terms, for quality of implementation (QoI) reasons compilers don't degrade your programs, but there are cases where quality of implementation is about something other than performance.

A compiler, given the appropriate flags, could add instrumentation to the program it is building for debugging purposes (this is often the case in library implementations, for example with checked iterators).
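For example, with GCC's libstdc++ (one concrete case; other implementations have similar switches), defining _GLIBCXX_DEBUG turns on checked iterators that carry extra bookkeeping, so the same source does measurably more work per operation:

    // Build with: g++ -D_GLIBCXX_DEBUG example.cpp
    #include <vector>

    int main() {
        std::vector<int> v{1, 2, 3};
        auto it = v.begin();
        v.push_back(4);   // may reallocate, invalidating 'it'
        return *it;       // debug mode diagnoses the invalid iterator;
                          // a release build is simply undefined behavior
    }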

Note that the simple answer to when a compiler would degrade your program is twofold: when the user asks for it, or when the implementor doesn't want the compiler to have any users.

Answered Oct 31 '22 by David Rodríguez - dribeas


Section 5.1.2.3 of the C standard says:

The semantic descriptions in this International Standard describe the behavior of an abstract machine in which issues of optimization are irrelevant.

The C++ standard has similar wording in 1.9 [intro.execution].

Both standards have the same definition of observable behaviour:

The least requirements on a conforming implementation are:
— Accesses to volatile objects are evaluated strictly according to the rules of the abstract machine.
— At program termination, all data written into files shall be identical to the result that execution of the program according to the abstract semantics would have produced.
— The input and output dynamics of interactive devices shall take place as specified in 7.21.3. The intent of these requirements is that unbuffered or line-buffered output appear as soon as possible, to ensure that prompting messages actually appear prior to a program waiting for input.
This is the observable behavior of the program.

So anything else, e.g. the performance of a for loop or the number of reads/writes done to non-volatile variables, is not considered observable, and so there are no corresponding performance requirements on the compiler.
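A sketch of the distinction (illustrative code, not from either standard): every access to the volatile object below must be performed, while the second loop touches only a non-volatile local and may be collapsed to a single store.

    volatile int ticks;   // accesses to this object are observable behavior

    int demo() {
        for (int i = 0; i < 1000; ++i)
            ticks = ticks + 1;   // 1000 reads and 1000 writes must all happen

        int scratch = 0;
        for (int i = 0; i < 1000; ++i)
            ++scratch;           // not observable: may be folded to scratch = 1000
        return scratch;
    }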

If the compiler wanted to re-evaluate a block of code 100 times (assuming it had no observable side effects, only altering the state of non-volatile variables) and check that the same results were obtained every time (guarding against cosmic rays or faulty hardware), that would be allowed by the standard.
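Written out by hand, such a standard-permitted (if absurd) compilation might look like this; compute and the fault response are hypothetical, and in any execution that matches the abstract machine the abort branch is never taken, so observable behavior is unchanged:

    #include <cstdlib>

    static int compute(int x) { return x * x + 1; }  // no I/O, no volatile access

    int paranoid_compute(int x) {
        int expected = compute(x);
        for (int i = 0; i < 99; ++i)          // re-evaluate 100 times in total
            if (compute(x) != expected)       // only possible on faulty hardware
                std::abort();                 // unreachable on the abstract machine
        return expected;
    }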

Answered Oct 31 '22 by Jonathan Wakely