This example seems to compile with VC10 and gcc (though my version of gcc is very old).
EDIT: R. Martinho Fernandez tried this on gcc 4.7 and the behaviour is still the same.
struct Base
{
    operator double() const { return 0.0; }
};

struct foo
{
    foo(const char* c) {}
};

struct Something : public Base
{
    void operator[](const foo& f) {}
};

int main()
{
    Something d;
    d["32"];
    return 0;
}
But clang complains:
test4.cpp:19:6: error: use of overloaded operator '[]' is ambiguous (with operand types 'Something' and 'const char [3]')
d["32"]
~^~~~~
test4.cpp:13:10: note: candidate function
void operator[](const foo& f) {}
^
test4.cpp:19:6: note: built-in candidate operator[](long, const char *)
d["32"]
^
test4.cpp:19:6: note: built-in candidate operator[](long, const restrict char *)
test4.cpp:19:6: note: built-in candidate operator[](long, const volatile char *)
test4.cpp:19:6: note: built-in candidate operator[](long, const volatile restrict char *)
Overload resolution considers two possible functions for this expression:
1. the member Something::operator[](const foo&), after a user-defined conversion of "32" to foo;
2. the built-in operator[](long, const char*), after a user-defined conversion of d to long via operator double().
If I had written d["32"] as d.operator[]("32"), then overload resolution wouldn't even look at option 2, and clang compiles it fine as well.
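For reference, a minimal sketch of that explicit-call variant (it assumes the struct definitions from the example above); because it is written as a member-function call, only member candidates are considered and the built-in operator[] never enters the overload set:

// assumes Base, foo and Something as defined in the example above
int main()
{
    Something d;
    d.operator[]("32"); // OK on all compilers: "32" converts to foo via foo(const char*)
    return 0;
}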
EDIT: (clarification of questions)
This seems to be a complicated area of overload resolution, so I'd very much appreciate answers that explain the overload resolution in this case in detail and that cite the standard (if some obscure or advanced rule that is likely to be unknown is involved).
If clang is correct, I'm also interested in knowing why the two candidates are ambiguous, i.e. why one is not preferred over the other. The answer would likely have to explain how overload resolution handles the implicit conversions (both user-defined and standard conversions) involved in the two candidates and why neither is better than the other.
Note: if operator double() is changed to operator bool(), all three compilers (clang, VC, gcc) refuse to compile the code with a similar ambiguity error.
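To make that note concrete, here is a sketch of the operator bool() variant (only Base changes, the rest is the same as the example above); with it, all three compilers report an ambiguity for d["32"]:

struct Base
{
    operator bool() const { return false; }   // only this conversion changed
};

struct foo
{
    foo(const char* c) {}
};

struct Something : public Base
{
    void operator[](const foo& f) {}
};

int main()
{
    Something d;
    d["32"];   // ambiguous everywhere: member operator[] vs. built-in operator[](ptrdiff_t, const char*)
    return 0;
}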
An implicit conversion sequence is the sequence of conversions required to convert an argument in a function call to the type of the corresponding parameter in the function declaration. The compiler tries to determine an implicit conversion sequence for each argument of each candidate.
The process of selecting the most appropriate overloaded function or operator is called overload resolution: when an overloaded name f is called, the compiler first builds a set of candidate functions and then ranks the viable ones.
Implicit type conversion, also known as automatic type conversion, is carried out by the compiler without any explicit request from the programmer; it takes place whenever an expression mixes types or an argument does not exactly match the type of its parameter.
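As a quick illustration of the two kinds of conversion sequence that matter below (the names here are purely illustrative and not from the question):

struct Wrapper
{
    Wrapper(const char* s) {}   // converting constructor: enables const char* -> Wrapper
};

void takes_long(long) {}
void takes_wrapper(const Wrapper&) {}

int main()
{
    takes_long(42);       // standard conversion sequence: int -> long
    takes_wrapper("hi");  // user-defined conversion sequence: const char[3] -> const char* -> Wrapper
    return 0;
}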
It should be easier to picture why the overload resolution is ambiguous by going through it step-by-step.
§13.5.5 [over.sub]
Thus, a subscripting expression x[y] is interpreted as x.operator[](y) for a class object x of type T if T::operator[](T1) exists and if the operator is selected as the best match function by the overload resolution mechanism (13.3.3).
Now, we first need an overload set. That's constructed according to §13.3.1 and contains member as well as non-member functions. See this answer of mine for a more detailed explanation.
§13.3.1 [over.match.funcs]
p2 The set of candidate functions can contain both member and non-member functions to be resolved against the same argument list. So that argument and parameter lists are comparable within this heterogeneous set, a member function is considered to have an extra parameter, called the implicit object parameter, which represents the object for which the member function has been called. [...]
p3 Similarly, when appropriate, the context can construct an argument list that contains an implied object argument to denote the object to be operated on.
// abstract overload set (return types omitted since irrelevant)
f1(Something&, foo const&); // linked to Something::operator[](foo const&)
f2(std::ptrdiff_t, char const*); // linked to operator[](std::ptrdiff_t, char const*)
f3(char const*, std::ptrdiff_t); // linked to operator[](char const*, std::ptrdiff_t)
Then, an argument list is constructed:
// abstract argument list
(Something&, char const[3]) // 'Something&' is the implied object argument
And then the argument list is tested against every member of the overload set:
f1 -> identity match on argument 1, conversion required for argument 2
f2 -> conversion required for argument 1, conversion required for argument 2 (decay)
f3 -> argument 1 incompatible, argument 2 incompatible, discarded
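The same tie can be reproduced with ordinary free functions mirroring the abstract candidates f1 and f2 (a sketch, assuming the struct definitions from the question; the f overloads below are just stand-ins for the abstract labels used above). A conforming compiler rejects the call as ambiguous for exactly the same reason as the subscript expression:

#include <cstddef>

// assumes Base, foo and Something as defined in the question

void f(Something&, const foo&) {}         // mirrors f1
void f(std::ptrdiff_t, const char*) {}    // mirrors f2

int main()
{
    Something d;
    f(d, "32");   // ambiguous: each candidate wins on one argument and loses on the other
    return 0;
}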
Then, since we found out that there are implicit conversions required, we take a look at §13.3.3 [over.match.best] p1:
Define ICSi(F) as follows:
- if F is a static member function, [...]; otherwise,
- let ICSi(F) denote the implicit conversion sequence that converts the i-th argument in the list to the type of the i-th parameter of viable function F. 13.3.3.1 defines the implicit conversion sequences and 13.3.3.2 defines what it means for one implicit conversion sequence to be a better conversion sequence or worse conversion sequence than another.
Now let's construct those implicit conversion sequences for f1 and f2 in the overload set (§13.3.3.1):
ICS1(f1): 'Something&' -> 'Something&', standard conversion sequence (identity)
ICS2(f1): 'char const[3]' -> 'foo const&', user-defined conversion sequence
ICS1(f2): 'Something&' -> 'std::ptrdiff_t', user-defined conversion sequence
ICS2(f2): 'char const[3]' -> 'char const*', standard conversion sequence
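Each of those conversion sequences is fine in isolation; the problem only appears when both candidates compete for the same call. A small sketch (again assuming the struct definitions from the question) verifying them one by one:

#include <cstddef>

// assumes Base, foo and Something as defined in the question

int main()
{
    Something d;

    Something& r = d;        // ICS1(f1): identity
    const foo& f = "32";     // ICS2(f1): user-defined, const char[3] -> foo
    std::ptrdiff_t n = d;    // ICS1(f2): user-defined, Something -> double -> std::ptrdiff_t
    const char* p = "32";    // ICS2(f2): standard, array-to-pointer decay

    (void)r; (void)f; (void)n; (void)p;
    return 0;
}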
§13.3.3.2 [over.ics.rank] p2
a standard conversion sequence (13.3.3.1.1) is a better conversion sequence than a user-defined conversion sequence or an ellipsis conversion sequence.
So ICS1(f1) is better than ICS1(f2), and ICS2(f1) is worse than ICS2(f2).
Conversely, ICS1(f2) is worse than ICS1(f1), and ICS2(f2) is better than ICS2(f1).
§13.3.3 [over.match.best]
p1 (cont.): Given these definitions, a viable function F1 is defined to be a better function than another viable function F2 if for all arguments i, ICSi(F1) is not a worse conversion sequence than ICSi(F2), and then [...]
p2: If there is exactly one viable function that is a better function than all other viable functions, then it is the one selected by overload resolution; otherwise the call is ill-formed.
Well, f*ck. :) As such, Clang is correct in rejecting that code.
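If you want the code to compile anyway, one way to break the tie (a sketch, not part of the original question) is to give Something an overload that is at least as good a match on both arguments, e.g. one taking const char* directly; in C++11 you could alternatively declare the conversion as explicit operator double(), which removes the built-in candidate because the conversion can no longer be used implicitly:

// assumes Base and foo as defined in the question

struct Something : public Base
{
    void operator[](const foo& f) {}
    void operator[](const char* s) {}   // identity match on the object, array-to-pointer on the argument
};

int main()
{
    Something d;
    d["32"];   // now unambiguous: operator[](const char*) is better than both earlier candidates
    return 0;
}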
It seems there is no question that both Something::operator[](const foo& f) and the built-in operator[](long, const char *) are viable candidate functions (13.3.2) for overload resolution. The types of the actual arguments are Something and const char[3] (which decays to const char*), and the implicit conversion sequences (ICSs) I think are:
for Something::operator[](const foo& f): (1-1) identity conversion, and (1-2) foo("32") through foo::foo(const char*);
for operator[](long, const char *): (2-1) long(double(d)) through Something::operator double() const (inherited from Base), and (2-2) identity conversion (after the array-to-pointer decay).
Now if we rank these ICSs according to (13.3.3.2), we can see that (1-1) is a better conversion than (2-1), and (1-2) is a worse conversion than (2-2). According to the definition in (13.3.3),
a viable function F1 is defined to be a better function than another viable function F2 if for all arguments i, ICSi(F1) is not a worse conversion sequence than ICSi(F2), ...
Therefore, neither of the two candidate functions is better than the other, and thus the call is ill-formed; i.e. Clang appears to be correct, and the code should not compile.
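A call-site alternative (again just a sketch, assuming the original definitions) is to construct the foo explicitly; since foo has no conversion to const char*, the built-in candidates are no longer viable and only the member operator[] remains:

// assumes Base, foo and Something as defined in the question

int main()
{
    Something d;
    d[foo("32")];   // unambiguous: only Something::operator[](const foo&) is viable
    return 0;
}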