Is there actually a reason why overloaded && and || don't short circuit?


All design processes result in compromises between mutually incompatible goals. Unfortunately, the design process for the overloaded && operator in C++ produced a confusing end result: that the very feature you want from && -- its short-circuiting behavior -- is omitted.

The details of how that design process ended up in this unfortunate place, those I don't know. It is however relevant to see how a later design process took this unpleasant outcome into account. In C#, the overloaded && operator is short circuiting. How did the designers of C# achieve that?

One of the other answers suggests "lambda lifting". That is:

A && B

could be realized as something morally equivalent to:

operator_&& ( A, ()=> B )

where the second argument uses some mechanism for lazy evaluation so that when evaluated, the side effects and value of the expression are produced. The implementation of the overloaded operator would only do the lazy evaluation when necessary.
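
To make that concrete in C++ terms, here is a minimal sketch of the lambda-lifting idea; the type Thing and the function logical_and are invented for illustration, and this is not an existing language feature -- the caller has to wrap the right-hand side in a lambda by hand:

#include <iostream>
#include <utility>

struct Thing { bool value; };   // stand-in for some user-defined type

// Hand-written "operator_&&( A, ()=> B )": the right-hand side arrives as a
// callable and is only invoked if the left-hand side turns out to be true-ish.
template< class Rhs >
Thing logical_and( Thing const& left, Rhs&& rhs )
{
    if( !left.value ) { return left; }      // short circuit: rhs() is never called
    return std::forward<Rhs>( rhs )();      // lazy evaluation of the right-hand side
}

int main()
{
    Thing a{ false };
    Thing r = logical_and( a, []{ std::cout << "rhs evaluated\n"; return Thing{ true }; } );
    std::cout << r.value << "\n";           // prints 0; "rhs evaluated" never appears
}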

This is not what the C# design team did. (Aside: though lambda lifting is what I did when it came time to do expression tree representation of the ?? operator, which requires certain conversion operations to be performed lazily. Describing that in detail would however be a major digression. Suffice to say: lambda lifting works but is sufficiently heavyweight that we wished to avoid it.)

Rather, the C# solution breaks the problem down into two separate problems:

  • should we evaluate the right-hand operand?
  • if the answer to the above was "yes", then how do we combine the two operands?

Therefore the problem is solved by making it illegal to overload && directly. Rather, in C# you must overload two operators, each of which answers one of those two questions.

class C
{
    // Is this thing "false-ish"? If yes, we can skip computing the right
    // hand side of an &&
    public static bool operator false (C c) { whatever }

    // If we didn't skip the RHS, how do we combine them?
    public static C operator & (C left, C right) { whatever }
    ...

(Aside: actually, three. C# requires that if operator false is provided then operator true must also be provided, which answers the question: is this thing "true-ish?". Typically there would be no reason to provide only one such operator so C# requires both.)

Consider a statement of the form:

C cresult = cleft && cright;

The compiler generates code for this as though you had written this pseudo-C#:

C cresult;
C tempLeft = cleft;
cresult = C.false(tempLeft) ? tempLeft : C.&(tempLeft, cright);

As you can see, the left hand side is always evaluated. If it is determined to be "false-ish" then it is the result. Otherwise, the right hand side is evaluated, and the eager user-defined operator & is invoked.

The || operator is defined in the analogous way, as an invocation of operator true and the eager | operator:

cresult = C.true(tempLeft) ? tempLeft : C.|(tempLeft, cright);

By defining all four operators -- true, false, & and | -- C# allows you to not only say cleft && cright but also non-short-circuiting cleft & cright, and also if (cleft) if (cright) ..., and c ? consequence : alternative and while(c), and so on.

Now, I said that all design processes are the result of compromise. Here the C# language designers managed to get short-circuiting && and || right, but doing so requires overloading four operators instead of two, which some people find confusing. The operator true/false feature is one of the least well understood features in C#. The goal of having a sensible and straightforward language that is familiar to C++ users was opposed by the desire to have short circuiting and the desire not to implement lambda lifting or other forms of lazy evaluation. I think that was a reasonable compromise position, but it is important to realize that it is a compromise position. Just a different compromise position than the designers of C++ landed on.

If the subject of language design for such operators interests you, consider reading my series on why C# does not define these operators on nullable Booleans:

http://ericlippert.com/2012/03/26/null-is-not-false-part-one/


The point is that (within the bounds of C++98) the right-hand operand would be passed to the overloaded operator function as an argument. In doing so, it would already be evaluated. There is nothing the operator||() or operator&&() code could or could not do that would avoid this.

The original operator is different, because it's not a function, but implemented at a lower level of the language.
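
A small demonstration of that difference, with an invented type Tracer: both calls to announce() run before the overloaded operator&& is even entered, and spelling the expression as an explicit function call makes it obvious why.

#include <iostream>

struct Tracer { bool value; };

// An overloaded && is an ordinary function, so both of its arguments
// are fully evaluated before the body runs.
Tracer operator&&( Tracer const& a, Tracer const& b )
{
    return Tracer{ a.value && b.value };
}

Tracer announce( char const* name, bool value )
{
    std::cout << "evaluating " << name << "\n";
    return Tracer{ value };
}

int main()
{
    // Prints "evaluating lhs" and "evaluating rhs", even though lhs is false.
    Tracer t = announce( "lhs", false ) && announce( "rhs", true );

    // Exactly the same call, spelled as the function it really is.
    Tracer u = operator&&( announce( "lhs", false ), announce( "rhs", true ) );

    std::cout << t.value << " " << u.value << "\n";     // 0 0
}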

Additional language features could have made non-evaluation of the right-hand operand syntactically possible. However, they didn't bother because there are only a select few cases where this would be semantically useful. (Just like ? :, which is not available for overloading at all.)

(It took them 16 years to get lambdas into the standard...)

As for the semantical use, consider:

objectA && objectB

This boils down to:

template< typename T >
ClassA::operator&&( T const & objectB )

Think about what exactly you'd like to do with objectB (of unknown type) here, other than calling a conversion operator to bool, and how you'd put that into words for the language definition.

And if you are calling conversion to bool, well...

objectA && objectB

does the same thing anyway, doesn't it? So why overload in the first place?
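
Put differently, a sketch with an invented class that provides nothing but a conversion to bool already gets short-circuiting for free from the built-in operator, with no overload in sight:

#include <iostream>

struct Handle
{
    int id;
    explicit operator bool() const
    {
        std::cout << "checking " << id << "\n";
        return id != 0;
    }
};

int main()
{
    Handle a{ 0 }, b{ 42 };

    // The built-in && works on the contextual bool conversions and short
    // circuits: only "checking 0" is printed, b is never looked at.
    if( a && b ) { std::cout << "both valid\n"; }
    else         { std::cout << "not both valid\n"; }
}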


A feature has to be thought of, designed, implemented, documented and shipped.

Now that we have thought of it, let's see why it might be easy now (and would have been hard to do back then). Also keep in mind that there is only a limited amount of resources, so adding it might have meant dropping something else (what would you have been willing to forgo for it?).


In theory, all operators could allow short-circuiting behavior with only one "minor" additional language feature, as of C++11 (when lambdas were introduced, 32 years after "C with classes" started in 1979, and still a respectable 13 years after C++98):

C++ would just need a way to annotate an argument as lazy-evaluated - a hidden lambda - to avoid the evaluation until necessary and allowed (pre-conditions met).


What would that theoretical feature look like (remember that any new feature should be widely usable)?

An annotation lazy which, applied to a function argument, makes the function a template expecting a functor, and makes the compiler pack the argument expression into a functor:

A operator&&(B b, __lazy C c) {return c;}

// And be called like
exp_b && exp_c;
// or
operator&&(exp_b, exp_c);

It would look under the cover like:

template<class Func> A operator&&(B b, Func&& f) {auto&& c = f(); return c;}
// With `f` restricted to no-argument functors returning a `C`.

// And the call:
operator&&(exp_b, [&]{return exp_c;});

Take special note that the lambda stays hidden, and will be called at most once.
There should be no performance-degradation due to this, aside from reduced chances of common-subexpression-elimination.


Besides implementation complexity and conceptual complexity (every feature increases both, unless it sufficiently eases those complexities for some other features), let's look at another important consideration: backwards compatibility.

While this language-feature would not break any code, it would subtly change any API taking advantage of it, which means any use in existing libraries would be a silent breaking change.

BTW: This feature, while easier to use, is strictly stronger than the C# solution of splitting && and || into two functions each for separate definition.


With retrospective rationalization, mainly because

  • in order to have guaranteed short-circuiting (without introducing new syntax) the operators would have to be restricted to a result that is the actual first argument, convertible to bool, and

  • short circuiting can be easily expressed in other ways, when needed.


For example, if a class T has associated && and || operators, then the expression

auto x = a && b || c;

where a, b and c are expressions of type T, can be expressed with short circuiting as

auto&& and_arg = a;
auto&& and_result = (and_arg? and_arg && b : and_arg);
auto x = (and_result? and_result : and_result || c);

or perhaps more clearly as

auto x = [&]() -> T_op_result
{
    auto&& and_arg = a;
    auto&& and_result = (and_arg? and_arg && b : and_arg);
    if( and_result ) { return and_result; } else { return and_result || c; }
}();

The apparent redundancy preserves any side-effects from the operator invocations.


While the lambda rewrite is more verbose, its better encapsulation allows one to define such operators.

I’m not entirely sure of the standard-conformance of all of the following (still a bit of influenza), but it compiles cleanly with Visual C++ 12.0 (2013) and MinGW g++ 4.8.2:

#include <iostream>
using namespace std;

void say( char const* s ) { cout << s; }

struct S
{
    using Op_result = S;

    bool value;
    auto is_true() const -> bool { say( "!! " ); return value; }

    friend
    auto operator&&( S const a, S const b )
        -> S
    { say( "&& " ); return a.value? b : a; }

    friend
    auto operator||( S const a, S const b )
        -> S
    { say( "|| " ); return a.value? a : b; }

    friend
    auto operator<<( ostream& stream, S const o )
        -> ostream&
    { return stream << o.value; }

};

template< class T >
auto is_true( T const& x ) -> bool { return !!x; }

template<>
auto is_true( S const& x ) -> bool { return x.is_true(); }

#define SHORTED_AND( a, b ) \
[&]() \
{ \
    auto&& and_arg = (a); \
    return (is_true( and_arg )? and_arg && (b) : and_arg); \
}()

#define SHORTED_OR( a, b ) \
[&]() \
{ \
    auto&& or_arg = (a); \
    return (is_true( or_arg )? or_arg : or_arg || (b)); \
}()

auto main()
    -> int
{
    cout << boolalpha;
    for( int a = 0; a <= 1; ++a )
    {
        for( int b = 0; b <= 1; ++b )
        {
            for( int c = 0; c <= 1; ++c )
            {
                S oa{!!a}, ob{!!b}, oc{!!c};
                cout << a << b << c << " -> ";
                auto x = SHORTED_OR( SHORTED_AND( oa, ob ), oc );
                cout << x << endl;
            }
        }
    }
}

Output:

000 -> !! !! || false
001 -> !! !! || true
010 -> !! !! || false
011 -> !! !! || true
100 -> !! && !! || false
101 -> !! && !! || true
110 -> !! && !! true
111 -> !! && !! true

Here each !! bang-bang shows a conversion to bool, i.e. an argument value check.

Since a compiler can easily do the same, and additionally optimize it, this is a demonstrated possible implementation and any claim of impossibility must be put in the same category as impossibility claims in general, namely, generally bollocks.


tl;dr: it is not worth the effort, due to very low demand (who would use the feature?) compared to rather high costs (special syntax needed).

The first thing that comes to mind is that operator overloading is just a fancy way to write functions, whereas the boolean versions of the operators || and && are built-in. That means the compiler has the freedom to short-circuit them, while the expression x = y && z with non-boolean y and z has to lead to a call to a function like X operator&& (Y, Z). This means that y && z is just a fancy way to write operator&&(y,z), which is just a call of an oddly named function where both parameters have to be evaluated before calling the function (including anything that would make short-circuiting appropriate).

However, one could argue that it should be possible to make the translation of && operators somewhat more sophisticated, like it is for the new operator which is translated into calling the function operator new followed by a constructor call.
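
To illustrate the analogy (class and output messages made up here): a single new-expression already expands into an allocation-function call followed by a constructor call, so multi-step translations of an operator are nothing new.

#include <cstdlib>
#include <iostream>
#include <new>

struct Widget
{
    Widget() { std::cout << "constructor\n"; }

    // Found by the new-expression; the compiler calls this first ...
    static void* operator new( std::size_t size )
    {
        std::cout << "operator new(" << size << ")\n";
        if( void* p = std::malloc( size ) ) return p;
        throw std::bad_alloc{};
    }
    static void operator delete( void* p ) { std::free( p ); }
};

int main()
{
    // ... so this one expression prints "operator new(...)" and then "constructor".
    Widget* w = new Widget;
    delete w;
}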

Technically this would be no problem; one would have to define language syntax specifically for the precondition that enables short-circuiting. However, the use of short-circuits would be restricted to cases where Y is convertible to X, or else there would have to be additional info on how to actually do the short-circuiting (i.e. compute the result from only the first parameter). The result would have to look somewhat like this:

X operator&&(Y const& y, Z const& z)
{
  if (shortcircuitCondition(y))
    return shortcircuitEvaluation(y);

  <"Syntax for an evaluation-Point for z here">

  return actualImplementation(y,z);
}

One seldom wants to overload operator|| and operator&&, because there is seldom a case where writing a && b is actually intuitive in a non-boolean context. The only exceptions I know of are expression templates, e.g. for embedded DSLs. And only a handful of those few cases would benefit from short-circuit evaluation. Expression templates usually don't, because they are used to form expression trees that are evaluated later, so you always need both sides of the expression.
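
A bare-bones sketch of such an expression template (all names invented here) shows why: the overloaded && does not compute anything, it merely records both operands in a node, so at that point there is nothing that could be short circuited.

#include <iostream>

// A terminal of the expression tree: a boolean leaf value.
struct Var
{
    bool value;
    bool eval() const { return value; }
};

// An && node just stores its two children; building it requires
// both operands, so nothing can be skipped here.
template< class L, class R >
struct AndNode
{
    L lhs;
    R rhs;
    // Evaluation happens later, over the whole tree (plain ?: used
    // here so we don't recurse into the overloaded && below).
    bool eval() const { return lhs.eval() ? rhs.eval() : false; }
};

template< class L, class R >
AndNode<L, R> operator&&( L const& lhs, R const& rhs ) { return { lhs, rhs }; }

int main()
{
    Var a{ false }, b{ true }, c{ true };
    auto expr = (a && b) && c;          // only builds a tree, evaluates nothing
    std::cout << expr.eval() << "\n";   // prints 0, computed on demand
}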

In short: neither compiler writers nor standards authors felt the need to jump through hoops and define and implement additional cumbersome syntax, just because one user in a million might get the idea that it would be nice to have short-circuiting on user-defined operator&& and operator|| - only to conclude that it is no less effort than writing the logic by hand.