
Defined argument evaluation order leads to sub-optimal code?

It is a known fact that the argument evaluation order in C and C++ is unspecified. For example, in the call foo(a(), b()), it is up to the compiler to decide which order of evaluation to pick and, consequently, which function to execute first. Recently a friend asked me why the order of evaluation is left unspecified in C and C++. When I searched for an answer, I learned that specifying an evaluation order would lead to sub-optimal code generation. But how is that so? Why would a defined order of evaluation of arguments lead to sub-optimal code? When I looked at Java's argument evaluation order, I found the following in the spec:

15.7.4. Argument Lists are Evaluated Left-to-Right

In a method or constructor invocation or class instance creation expression, argument expressions may appear within the parentheses, separated by commas. Each argument expression appears to be fully evaluated before any part of any argument expression to its right. If evaluation of an argument expression completes abruptly, no part of any argument expression to its right appears to have been evaluated.

That being the case, Java has a defined argument evaluation order, yet the claim that C or C++ compilers would yield sub-optimal code if such behavior were specified seems a little odd. Can you throw some light on this?
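To make this concrete, here is a minimal sketch (the functions a, b and foo are just placeholders with visible side effects) where the unspecified order becomes observable:

```cpp
#include <iostream>

// Placeholder functions with visible side effects, just to make
// the evaluation order observable.
int a() { std::cout << "a "; return 1; }
int b() { std::cout << "b "; return 2; }

void foo(int, int) { std::cout << "\n"; }

int main() {
    // A conforming compiler may print "a b " or "b a " here;
    // the order in which the two arguments are evaluated is unspecified.
    foo(a(), b());
}
```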

asked Jul 12 '12 by sasidhar

1 Answer

It's partially historical: on processors with few registers, for example, one traditional (and simple) optimization technique is to evaluate first the subexpression which needs the most registers. If one subexpression requires 5 registers and the other 4, for example, you can evaluate the 5-register one first and hold its result in the register not needed by the one requiring 4.
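A rough sketch of that idea, with a made-up helper g and assuming a register-poor target:

```cpp
// Made-up helper; its body doesn't matter for the point being made.
int g(int small_result, int big_result) { return small_result + big_result; }

int caller(int a, int b, int c, int d, int e)
{
    // The first argument (a + b) is cheap; the second needs several
    // registers while it is being computed.  With the order unspecified,
    // a compiler for a register-poor machine may evaluate the big
    // expression first and then the cheap one, so that only one
    // intermediate result has to stay live.  A mandated left-to-right
    // order would force it to keep (or spill) the value of a + b across
    // the whole evaluation of the larger expression.
    return g(a + b, ((a * b) + (c * d)) * e);
}
```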

This is probably less relevant than usually thought. The compiler can reorder (even in Java) if the expressions have no side effects, or if the reordering doesn't change the observable behavior of the program. Modern compilers are able to determine this far better than compilers twenty or more years ago (when the C++ rule was formulated). And presumably, when they aren't able to determine this, you're doing enough in each expression that the extra spill to memory doesn't matter.
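For instance, in a call like the following (h is a made-up function), both argument expressions are side-effect free, so under the as-if rule the compiler can pick whichever order produces better code, in C++ and in Java alike:

```cpp
// Made-up function; both of its argument expressions below are free
// of side effects, so reordering their evaluation is unobservable.
int h(int lhs, int rhs) { return lhs - rhs; }

int use(int x, int y)
{
    // The compiler may evaluate x * x and y + 1 in either order,
    // because no conforming program can tell the difference.
    return h(x * x, y + 1);
}
```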

At least, that's my gut feeling. I've been told by at least one person who actually works on optimizers that it would make a significant difference, so I won't say that I'm sure about it.

EDIT:

Just to add some comments with regard to the Java model. When Java was being designed, it was designed as an interpreted language. Extreme performance wasn't an issue; the goals were extreme safety and reproducibility. Thus, it specifies many things very precisely, so that any program which compiles will have exactly the same behavior regardless of the platform: no undefined behavior, no implementation-defined behavior, and no unspecified behavior, regardless of cost (but with the belief that this could be done at reasonable cost on any of the most widespread machines). One of the initial design goals of C (and indirectly C++) was that unnecessary extra runtime cost should be minimal, that consistency between platforms wasn't a goal (since at the time even common platforms varied greatly), and that safety, while a concern, wasn't primordial. While attitudes have evolved somewhat, there is still a goal to be able to support, efficiently, any machine which might be out there, without requiring the newest, most complex compiler technologies. And different goals naturally lead to different solutions.

answered Sep 20 '22 by James Kanze