Consider the following:
struct A {
    A(float) {}
    A(int) {}
};

int main() {
    A{1.1}; // error: ambiguous
}
This fails to compile with an error about an ambiguous overload of A::A. Both candidates are considered viable, because the requirement is simply:
Second, for F to be a viable function, there shall exist for each argument an implicit conversion sequence (13.3.3.1) that converts that argument to the corresponding parameter of F.
While there is an implicit conversion sequence from double to int, the A(int) overload isn't actually viable (in the canonical, non-C++-standard sense): choosing it would involve a narrowing conversion and thus be ill-formed.
Why are narrowing conversions not considered in the process of determining viable candidates? Are there any other situations where an overload is considered ambiguous despite only one candidate being viable?
A problem is that narrowing conversions cannot be detected from types alone; whether a conversion narrows can depend on the value being converted, and there are very complex ways to generate values at compile time in C++.
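As a rough, hypothetical sketch of that value dependence (the names fits, too_big, and runtime are made up for illustration), the very same int-to-unsigned-char conversion is accepted or rejected depending on whether a constant value is known to fit:
constexpr int fits = 100;       // constant expression, representable in unsigned char
constexpr int too_big = 1000;   // constant expression, NOT representable

int main() {
    unsigned char a{fits};        // OK: not narrowing, the value is known to fit
    // unsigned char b{too_big};  // ill-formed: narrowing, even though the types match case 'a'
    int runtime = 100;
    // unsigned char c{runtime};  // ill-formed: same value, but not a constant expression
    (void)a; (void)runtime;
}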
Blocking narrowing conversions is a good thing. Making C++'s overload resolution even more complex than it already is would be a bad thing. Ignoring narrowing rules when determining the viable candidates (which keeps overload resolution purely about types), and then producing an error when the selected overload would result in a narrowing conversion, avoids making overload resolution more complex while still giving a way to detect and prevent narrowing conversions.
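A minimal sketch of that two-step behaviour (struct B here is a made-up type, not from the question): with a single candidate there is no ambiguity, so overload resolution succeeds, and only then does the narrowing check reject the call:
struct B {
    B(int) {}
};

int main() {
    // Overload resolution has exactly one candidate and selects it;
    // only afterwards is the double -> int narrowing diagnosed.
    // B{1.1};   // ill-formed: narrowing conversion from 'double' to 'int'
    B{1};        // fine: no conversion needed
}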
Two other examples where only one candidate is "really" usable are function templates that fail "late", during instantiation, and copy-list-initialization (where explicit constructors are considered, but if one is chosen, you get an error). Having those affect overload resolution would similarly make it even more complex than it already is.
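For the copy-list-initialization case, a hedged sketch (the type E is invented for illustration): the explicit constructor is still a candidate, and once it is selected the program is ill-formed:
struct E {
    explicit E(int) {}
};

int main() {
    E direct{1};     // OK: direct-list-initialization may use the explicit constructor
    // E copy = {1}; // ill-formed: the explicit constructor is chosen,
                     // but copy-list-initialization may not call it
}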
Now, one might ask, why not fold narrowing conversion purely into the type system?
Making narrowing conversions purely type-based would not be viable. Such a change could break huge amounts of "legacy" code that the compiler could prove to be valid. The effort required to sweep a code base is far more worthwhile when most of the errors are actual errors, and not the new compiler version being a jerk. For example:
unsigned char buff[]={0xff, 0x00, 0x1f};
This would fail under a type-based narrowing rule, because 0xff is of type int, and such code is very common. Had such code required pointlessly converting the int literals to unsigned char explicitly, odds are the sweep would have ended with us setting a flag to tell the compiler to shut up about the stupid error.
Narrowing is something the compiler only knows about for built-in types. A user-defined implicit conversion can't be marked as narrowing or not.
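A sketch of that limitation (the Lossy type is invented here): a user-defined conversion that loses precision exactly the way double-to-float does cannot be tagged as narrowing, so list-initialization has nothing to check:
struct Lossy {
    double d;
    // This loses precision just like double -> float, but there is no
    // syntax to declare the conversion "narrowing", so the compiler can't know.
    operator float() const { return static_cast<float>(d); }
};

int main() {
    double d = 1.1;
    // float f1{d};      // built-in narrowing: ill-formed in list-initialization
    float f2{Lossy{d}};  // user-defined conversion: accepted, precision silently lost
    (void)f2;
}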
Narrowing conversions shouldn't be permitted to be implicit in the first place. (Unfortunately that was required for C compatibility. This has been somewhat corrected with {} initialization prohibiting narrowing for built-in types.)
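A small sketch of that partial correction (plain standard behaviour, nothing specific to the question's code): the C-style initialization forms still narrow silently, while brace initialization rejects the same conversion:
int main() {
    double d = 1.1;
    int a = d;    // allowed for C compatibility: fractional part silently discarded
    int b(d);     // likewise allowed
    // int c{d};  // ill-formed: {} initialization prohibits the narrowing conversion
    (void)a; (void)b;
}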
Given these, it makes sense that the overload rules don't bother to mention this special case. It might be an occasional convenience, but it's not all that valuable. IMO it's better in general to have fewer factors involved in overload resolution and to reject more things as ambiguous, forcing the programmer to resolve such things explicitly.
Also, double to float is a narrowing conversion when the double isn't a constant expression, or when its value is too large to be represented as a float:
#include <iostream>
#include <iomanip>
int main() {
    double d{1.1};
    float f{d};
    std::cout << std::setprecision(100) << d << " " << f << '\n';
}
This will normally produce an error:
main.cpp:7:13: error: non-constant-expression cannot be narrowed from type 'double' to 'float' in initializer list [-Wc++11-narrowing]
float f{d};
^
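For contrast, a quick sketch of the constant-expression exception mentioned above (the literal 1e39 is chosen only because it exceeds float's range):
int main() {
    float a{1.1};     // OK: constant expression whose value is within float's range
    // float b{1e39}; // ill-formed: constant, but outside float's range, so it narrows
    double d = 1.1;
    // float c{d};    // ill-formed: not a constant expression (the case shown above)
    (void)a; (void)d;
}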