C++0x is going to make the following code, and similar code, ill-formed, because it requires a so-called narrowing conversion of a double to an int.
int a[] = { 1.0 };
I'm wondering whether this kind of initialization is used much in real-world code. How much code will be broken by this change? Is it much effort to fix it in your code, if your code is affected at all?
For reference, see 8.5.4/6 of n3225
A narrowing conversion is an implicit conversion
- from a floating-point type to an integer type, or
- from long double to double or float, or from double to float, except where the source is a constant expression and the actual value after conversion is within the range of values that can be represented (even if it cannot be represented exactly), or
- from an integer type or unscoped enumeration type to a floating-point type, except where the source is a constant expression and the actual value after conversion will fit into the target type and will produce the original value when converted back to the original type, or
- from an integer type or unscoped enumeration type to an integer type that cannot represent all the values of the original type, except where the source is a constant expression and the actual value after conversion will fit into the target type and will produce the original value when converted back to the original type.
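To make the four bullets above concrete, here is a small, hypothetical example (the variable names are invented, and it assumes a typical platform with 8-bit char): each commented-out initializer should be rejected by a C++11-conforming compiler as a narrowing conversion, while the uncommented ones are allowed because the source is a constant expression whose value fits.
int main()
{
    double d = 1.5;
    long long big = 1LL << 40;

    // Floating-point to integer: always narrowing.
    // int i{d};            // error

    // double to float: allowed only for an in-range constant expression.
    float f1{1.0};          // OK: constant expression, value fits
    // float f2{d};         // error: d is not a constant expression

    // Integer to floating-point: allowed only for a constant expression
    // whose value converts back exactly.
    double d1{42};          // OK
    // double d2{big};      // error: big is not a constant expression

    // Integer to a smaller integer type: allowed only for a constant
    // expression whose value fits.
    char c1{100};           // OK (assuming the usual 8-bit char)
    // char c2{300};        // error: 300 does not fit

    (void)d; (void)big; (void)f1; (void)d1; (void)c1;
}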
A narrowing conversion changes a value to a data type that might not be able to hold some of the possible values. For example, a fractional value is truncated when it is converted to an integral type, and a numeric type converted to bool is reduced to either true or false.
If you make a narrowing conversion intentionally, make your intentions explicit by using a static cast. Otherwise, this error message almost always indicates you have a bug in your code. You can fix it by making sure the objects you initialize have types that are large enough to handle the inputs.
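As a hedged sketch of that advice (the names ratio, n, and wide are made up for illustration):
int main()
{
    double ratio = 0.75;

    // int bad[] = { ratio * 100 };               // ill-formed in C++11: narrowing double to int
    int n[] = { static_cast<int>(ratio * 100) };  // OK: the narrowing is made explicit
    double wide[] = { ratio * 100 };              // OK: a type wide enough to hold the input

    (void)n; (void)wide;
}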
I ran into this breaking change when I used GCC. The compiler printed an error for code like this:
void foo(const unsigned long long &i) { unsigned int a[2] = {i & 0xFFFFFFFF, i >> 32}; }
In function 'void foo(const long long unsigned int&)':
error: narrowing conversion of '(((long long unsigned int)i) & 4294967295ull)' from 'long long unsigned int' to 'unsigned int' inside { }
error: narrowing conversion of '(((long long unsigned int)i) >> 32)' from 'long long unsigned int' to 'unsigned int' inside { }
Fortunately, the error messages were straightforward and the fix was simple:
void foo(const unsigned long long &i) { unsigned int a[2] = {static_cast<unsigned int>(i & 0xFFFFFFFF), static_cast<unsigned int>(i >> 32)}; }
The code was in an external library, with only two occurrences in one file. I don't think the breaking change will affect much code. Novices might get confused, though.
I would be surprised and disappointed in myself to learn that any of the C++ code I wrote in the last 12 years had this sort of problem. But most compilers would have spewed warnings about any compile-time "narrowings" all along, unless I'm missing something.
Are these also narrowing conversions?
unsigned short b[] = { -1, INT_MAX };
If so, I think they might come up a bit more often than your floating-type to integral-type example.
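For what it's worth, here is a quick sketch (not from the original answer, and assuming the common 16-bit unsigned short and 32-bit int) of how a C++11 compiler should treat those initializers:
#include <climits>

int main()
{
    // Both -1 and INT_MAX are constant expressions, but neither value
    // survives a round trip through unsigned short, so both conversions
    // are narrowing and the braced initializer is ill-formed:
    // unsigned short b[] = { -1, INT_MAX };      // error: narrowing

    // Spelling the truncation out keeps the code well-formed:
    unsigned short b[] = { static_cast<unsigned short>(-1),
                           static_cast<unsigned short>(INT_MAX) };
    (void)b;
}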