My friend says he read on some page on SO that they are different, but how could the two possibly be different?
Case 1
int i=999;
char c=i;
Case 2
char c=999;
In the first case, we initialize the integer i to 999, then initialize c with i, which is in fact 999. In the second case, we initialize c directly with 999. The truncation and loss of information aside, how on earth are these two cases different?
EDIT
Here's the link I was talking about:
why no overflow warning when converting int to char
One member commenting there says: "It's not the same thing. The first is an assignment, the second is an initialization."
So isn't it about a lot more than just compiler optimization?
They have the same semantics.
The constant 999 is of type int.
int i=999;
char c=i;
i is created as an object of type int and initialized with the int value 999, with the obvious semantics.
c is created as an object of type char, and initialized with the value of i, which happens to be 999. That value is implicitly converted from int to char.
The signedness of plain char is implementation-defined. If plain char is an unsigned type, the result of the conversion is well defined: the value is reduced modulo CHAR_MAX+1. For a typical implementation with 8-bit bytes (CHAR_BIT==8), CHAR_MAX+1 will be 256, and the value stored will be 999 % 256, or 231.
If plain char is a signed type, and 999 exceeds CHAR_MAX, the conversion yields an implementation-defined result (or, starting with C99, raises an implementation-defined signal, but I know of no implementations that do that). Typically, for a 2's-complement system with CHAR_BIT==8, the result will be -25.
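As a quick illustration (the exact output depends on your implementation's choices for CHAR_BIT and the signedness of plain char, so treat the expected values as typical rather than guaranteed), a small test program:

#include <stdio.h>
#include <limits.h>

int main(void) {
    int i = 999;
    char c = i;   /* implicit conversion from int to char */
    /* If plain char is unsigned (CHAR_MIN == 0) and CHAR_BIT == 8, expect 231.
       If plain char is signed, the result is implementation-defined;
       on a typical 2's-complement system it will be -25. */
    printf("CHAR_MIN = %d, c = %d\n", CHAR_MIN, (int)c);
    return 0;
}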
char c=999;
c is created as an object of type char. Its initial value is the int value 999 converted to char -- by exactly the same rules I described above.
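Since both snippets go through exactly the same conversion, they end up storing the same value. Here's a minimal check you can run yourself (on any one implementation the two results will match, whatever that value happens to be):

#include <stdio.h>

int main(void) {
    int i = 999;
    char c1 = i;    /* Case 1: converted from the value of i */
    char c2 = 999;  /* Case 2: converted from the constant 999 */
    printf("c1 = %d, c2 = %d, same: %d\n", (int)c1, (int)c2, c1 == c2);
    return 0;
}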
If CHAR_MAX >= 999 (which can happen only if CHAR_BIT, the number of bits in a byte, is at least 10), then the conversion is trivial. There are C implementations for DSPs (digital signal processors) with CHAR_BIT set to, for example, 32. It's not something you're likely to run across on most systems.
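If you want to see what your own implementation does, the relevant constants are in <limits.h>:

#include <stdio.h>
#include <limits.h>

int main(void) {
    printf("CHAR_BIT = %d\n", CHAR_BIT);
    printf("CHAR_MIN = %d, CHAR_MAX = %d\n", CHAR_MIN, CHAR_MAX);
    /* If CHAR_MAX >= 999, converting 999 to char loses nothing. */
    return 0;
}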
You may be more likely to get a warning in the second case, since it's converting a constant expression; in the first case, the compiler might not keep track of the expected value of i. But a sufficiently clever compiler could warn about both, and a sufficiently naive (but still fully conforming) compiler could warn about neither.
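For what it's worth, GCC and Clang tend to behave roughly as sketched below; the exact diagnostics vary by compiler and version, so treat the comments as typical rather than guaranteed:

void demo(void) {
    int i = 999;
    char c1 = i;    /* often silent by default; -Wconversion (GCC/Clang) can flag it */
    char c2 = 999;  /* often warns even by default, e.g. via GCC's -Woverflow diagnostic */
    (void)c1; (void)c2;
}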
As I said above, the result of converting a value to a signed type, when the source value doesn't fit in the target type, is implementation-defined. I suppose it's conceivable that an implementation could define different rules for constant and non-constant expressions. That would be a perverse choice, though; I'm not sure even the DS9K does that.
As for the referenced comment "The first is an assignment, the second is an initialization", that's incorrect. Both are initializations; there is no assignment in either code snippet. There is a difference in that one is an initialization with a constant value, and the other is not. Which implies, incidentally, that the second snippet could appear at file scope, outside any function, while the first could not.
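To make that last point concrete, here is a sketch (assuming C; the last declaration is the one a C compiler must reject):

/* At file scope, outside any function: */
int  i  = 999;
char c2 = 999;  /* OK: 999 is a constant expression */
char c1 = i;    /* constraint violation in C: i is not a constant expression */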