I had thought that this would be an easy question to resolve via Google, but I can't seem to find a definitive (or even speculative) answer:
When using a comparison operator, in which direction does the implicit conversion occur?
int i = -1;
size_t t = 1;
bool result = i < t;
Is this equivalent to:
bool result = i < int(t); // equals true
or:
bool result = size_t(i) < t; // equals false
That is the easy part of the question - the second part is "what is the general rule?". Several conventions seem plausible, although the second interpretation above would yield significantly different behaviour to what most people would intuitively expect.
The VC++ compiler seems to think it's worth a level 3 warning when you compare an int with a size_t - and yet it only gives a level 4 warning when you return a negative number from a function that returns a size_t (which wraps around to a huge unsigned value).
In an effort to get rid of all level 4 warnings, I now explicitly cast everything anyway, but I wanted to know "the truth". This must be defined somewhere...
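For reference, here is a minimal sketch of the second warning case; bad_count is a hypothetical function invented for illustration, and the value printed assumes the common case of a 64-bit size_t:

#include <cstddef>
#include <iostream>

// Hypothetical illustration: a function declared to return size_t
// that (incorrectly) returns a negative number.
std::size_t bad_count() {
    return -1; // converted to size_t, wrapping around to SIZE_MAX
}

int main() {
    std::cout << bad_count() << '\n'; // e.g. 18446744073709551615 with a 64-bit size_t
}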
The rules (the "usual arithmetic conversions") are fairly complex, and depend on the implementation's type sizes. Basically, however:
1. Both types are "promoted". This means that anything smaller than int is promoted to int. (In the unlikely case that size_t is smaller than int, it will be promoted to a signed int, and lose its unsignedness.)
2. If one of the types can contain all of the values of the other, the other is converted to this type.
3. If one of the types is unsigned and the other is signed, and they have the same size, the signed one is converted to the unsigned one (see the sketch below).
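A sketch of how these rules play out, assuming the common model of a 16-bit short, 32-bit int, and 64-bit long long (the results differ where the sizes differ):

#include <iostream>

int main() {
    // Rule 1: unsigned short is smaller than int, so it is promoted to
    // (signed) int; the comparison is then int < int.
    unsigned short us = 1;
    int a = -1;
    std::cout << (a < us) << '\n';  // prints 1 (true)

    // Rule 2: assuming 64-bit long long and 32-bit int, long long can
    // represent every unsigned int value, so c is converted to long long.
    long long b = -1;
    unsigned int c = 1;
    std::cout << (b < c) << '\n';   // prints 1 (true)

    // Rule 3: int and unsigned int have the same size, so the signed
    // operand is converted to unsigned int, and -1 becomes a huge value.
    int d = -1;
    unsigned int e = 1;
    std::cout << (d < e) << '\n';   // prints 0 (false)
}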
For int and size_t (which is required to be unsigned), this means that unless size_t is smaller than int, the int will be converted to a size_t.
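In other words, the first snippet in the question behaves like the second interpretation. A quick check (assuming the usual case where size_t is at least as wide as int):

#include <cstddef>
#include <iostream>

int main() {
    int i = -1;
    std::size_t t = 1;
    // i is converted to size_t first, so this is size_t(i) < t:
    // size_t(-1) is the largest size_t value, which is not less than 1.
    bool result = i < t;
    std::cout << std::boolalpha << result << '\n';               // prints false
    std::cout << std::boolalpha << (std::size_t(i) < t) << '\n'; // also false
}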