I've checked the GCC buglist and the Clang buglist and don't see anything relevant yet.
This Wandbox link shows some C++11/C++14 code exercising decltype(x) and decltype((x)) for various kinds of x captured by lambdas. GCC and Clang give different answers for this code. Which of them, if either, is correct?
Here's the offending snippet:
// inside main()
int i = 42;
int &j = i;
[j=j](){
    static_assert(std::is_same<decltype(j), GCC(const) int>::value, "");       // A
    static_assert(std::is_same<decltype((j)), const int&>::value, "");         // B
}();
[=](){
    static_assert(std::is_same<decltype(j), int&>::value, "");                 // C
    static_assert(std::is_same<decltype((j)), CLANG(const) int&>::value, "");  // D
}();
where:
#ifdef __clang__
#define CLANG(x) x
#define GCC(x)
#else
#define GCC(x) x
#define CLANG(x)
#endif
I believe that in both cases, the thing that is actually captured(*) is a (non-const) int initialized to a copy of j's value (that is to say, i's value). Since the lambda isn't marked mutable, its operator() is going to be a const member function. With those prerequisites out of the way, let's proceed...
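As a rough mental model (a hand-written sketch of my own, not what either compiler actually generates, and it deliberately ignores the special non-odr-use rules for the unparenthesized name discussed below), the [j=j] lambda behaves like a class with a plain int member and a const call operator, which is why the parenthesized (j) is a const lvalue inside the body:

#include <type_traits>

struct Closure {                // hypothetical analogue of [j=j](){ ... }
    int j;                      // the capture itself is a plain (non-const) int
    void operator()() const {   // non-mutable lambda => const call operator
        // Unparenthesized: the declared type of the member.
        static_assert(std::is_same<decltype(j), int>::value, "");
        // Parenthesized: member access through a const this, a const int lvalue.
        static_assert(std::is_same<decltype((j)), const int&>::value, "");
    }
};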
On line // A, GCC tells me that the decltype of the explicitly init-captured j is const int, when I'm almost positive that it ought to be int (per Clang).
On line // B, both compilers agree that (j) is an lvalue referring to a const int (since the lambda is not marked mutable); this makes perfect sense to me.
On line // C, both compilers agree that j is a name referring to the int& declared in the enclosing scope (int &j = i; above). This is a consequence of 5.1.2 [expr.prim.lambda]/19, or rather, a consequence of the-thing-that-happens-when-that-clause-is-not-being-invoked. Inside a [=] lambda, the name j refers to the j in the outer scope, but the expression (j) refers to the (j) that would exist if j were to have been captured. I don't fully understand how this works or why it's desirable, but there it is. I'm willing to stipulate that this is not a bug in either compiler.
On line // D, Clang tells me that (j) is an lvalue referring to a const int, whereas GCC tells me that it's an lvalue referring to a non-const int. I'm pretty sure that Clang is right and GCC is wrong; decltype((j)) should be the same whether j is captured implicitly or explicitly.
So: are // A and // D both bugs in GCC?

(*) In fact, nothing is technically captured by the second lambda, because it doesn't use j in any evaluated context. That's why lines // A and // C give different answers. But I don't know any nice terminology for the-thing-that-is-being-done-to-j, so I'm just saying "captured".
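To make that concrete, here is a small sketch of my own: since a decltype operand is not an odr-use, the [=] lambda should end up with no non-static data members at all, which std::is_empty can observe (this is my expectation of how the closure is laid out, not something the snippet above tests):

#include <type_traits>

int main() {
    int i = 42;
    int &j = i;
    auto lam = [=]() {
        static_assert(std::is_same<decltype(j), int&>::value, ""); // same as // C
    };
    // j is never odr-used inside lam, so it is not implicitly captured:
    // the closure type should have no non-static data members.
    static_assert(std::is_empty<decltype(lam)>::value, "");
    lam();
}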
I believe that both compilers are wrong for (A) and gcc is wrong for (D).
I believe that gcc is wrong for (A) and (D), while clang is correct for both.
The relevant sections of [expr.prim.lambda] are:
An init-capture behaves as if it declares and explicitly captures a variable of the form “auto init-capture ;” whose declarative region is the lambda-expression’s compound-statement, except that:
— if the capture is by copy (see below), the non-static data member declared for the capture and the variable are treated as two different ways of referring to the same object, which has the lifetime of the non-static data member, and no additional copy and destruction is performed,
and
Every id-expression within the compound-statement of a lambda-expression that is an odr-use (3.2) of an entity captured by copy is transformed into an access to the corresponding unnamed data member of the closure type.
decltype(j) is not an odr-use of j, therefore no such transformation should be considered. Thus, in the case of [=]{...}, decltype(j) should yield int&. However, in the case of an init-capture, the behavior is as if there were a variable of the form auto j = j;, and the variable j refers to the same unnamed non-static data member, with no such transformation necessary. So in the case of [j=j]{...}, decltype(j) should yield the type of that variable, which is int. It is definitely not const int. That is a bug.
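To see the "auto j = j;" reading in isolation, here is a sketch of my own outside any lambda (the variable is named k only to avoid shadowing): plain auto deduction from an int& lvalue drops both the reference and any top-level const, so the declared variable, and hence its decltype, is plain int:

#include <type_traits>

int main() {
    int i = 42;
    int &j = i;
    auto k = j; // plays the role of the init-capture: deduces int
    static_assert(std::is_same<decltype(k), int>::value, ""); // int, not const int
}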
The next relevant section:
Every occurrence of decltype((x)) where x is a possibly parenthesized id-expression that names an entity of automatic storage duration is treated as if x were transformed into an access to a corresponding data member of the closure type that would have been declared if x were an odr-use of the denoted entity. [ Example:

void f3() {
    float x, &r = x;
    [=] { // x and r are not captured (appearance in a decltype operand is not an odr-use)
        decltype(x) y1;        // y1 has type float
        decltype((x)) y2 = y1; // y2 has type float const& because this lambda
                               // is not mutable and x is an lvalue
        decltype(r) r1 = y1;   // r1 has type float& (transformation not considered)
        decltype((r)) r2 = y2; // r2 has type float const&
    };
}
—end example ]
The example further illustrates that decltype(j) should be int& in the implicit-copy case, and also demonstrates that decltype((j)) is treated as if j were an access to the corresponding data member that would have been declared: which is int const& in both cases (as the lambda is not mutable and j is an lvalue). Your (C) and (D) cases exactly mirror the r1, r2 declarations in the example. While the examples are not normative, this certainly suggests that gcc is in the wrong for having different behavior.
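Condensing the conclusion into one self-contained C++14 test, with the types this answer argues are correct (per the question, GCC at the time rejected assertions A and D, while Clang accepted all four):

#include <type_traits>

int main() {
    int i = 42;
    int &j = i;
    [j = j]() {
        static_assert(std::is_same<decltype(j), int>::value, "");          // A
        static_assert(std::is_same<decltype((j)), const int&>::value, ""); // B
    }();
    [=]() {
        static_assert(std::is_same<decltype(j), int&>::value, "");         // C
        static_assert(std::is_same<decltype((j)), const int&>::value, ""); // D
    }();
}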