I would like to know if there is a difference between:

(unsigned long)x           /* (1) */
*((unsigned long *)&x)     /* (2) */

where x is an unsigned variable.
I would also like to know if there is a good reason to ever use (2) over (1). I have seen (2) in legacy code which is why I was wondering. From the context, I couldn't understand why (2) was being favored over (1). And from the following test I wrote, I have concluded that at least the behavior of an upcast is the same in either case:
/* compile with gcc -lm */
#include <stdio.h>
#include <math.h>

int main(void)
{
    unsigned max_unsigned = pow(2, 8 * sizeof(unsigned)) - 1;

    printf("VALUES:\n");
    printf("%u\n", max_unsigned + 1);                       /* wraps to 0 */
    printf("%lu\n", (unsigned long)max_unsigned + 1);       /* case 1 */
    printf("%lu\n", *((unsigned long *)&max_unsigned) + 1); /* case 2 */

    printf("SIZES:\n");
    printf("%zu\n", sizeof(max_unsigned));                      /* %zu for size_t */
    printf("%zu\n", sizeof((unsigned long)max_unsigned));       /* case 1 */
    printf("%zu\n", sizeof(*((unsigned long *)&max_unsigned))); /* case 2 */

    return 0;
}
Output:
VALUES:
0
4294967296
4294967296
SIZES:
4
8
8
From my perspective, there should be no differences between (1) and (2), but I wanted to consult the SO experts for a sanity check.
The first cast is legal; the second cast may not be legal.
The first cast tells the compiler to use its knowledge of the variable's type to convert the value to the desired type; the compiler does so, provided that a proper conversion is defined in the language standard.
The second cast tells the compiler to forget its knowledge of the variable's type, and re-interpret its internal representation as that of a different type *. This has limited applicability: as long as the binary representation matches that of the type pointed to by the target pointer, the conversion will work. However, it is not equivalent to the first cast, because in this situation a value conversion never takes place.
Switching the type of the variable being cast to something with a different representation, say, a float, illustrates this point well: the first conversion produces a correct result, while the second conversion produces garbage:
float test = 123456.0f;
printf("VALUES:\n");
printf("%f\n", test + 1);
printf("%lu\n", (unsigned long)test + 1);
printf("%lu\n", *((unsigned long *)&test) + 1); // Undefined behavior
This prints
123457.000000
123457
1206984705
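If reinterpreting the bytes is really what is wanted, the well-defined way to get that second result is to memcpy the object representation into a variable of the destination type. A minimal sketch, assuming a 4-byte IEEE 754 float (the variable names are illustrative):

#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void)
{
    float test = 123456.0f;
    uint32_t bits;

    /* Copy the object representation; unlike the pointer cast,
       this does not violate the aliasing rules in 6.5 7. */
    memcpy(&bits, &test, sizeof bits);

    printf("%u\n", (unsigned)bits); /* 1206984704 with IEEE 754 */
    return 0;
}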
* This kind of re-interpretation is defined only in limited situations: for example, when the new type is a character type, a type compatible with the effective type of the object, or a struct/union with the first member being a valid conversion source/target. Otherwise, this leads to undefined behavior. See C 2011 (N1570), 6.5 7, for the complete description. Thanks, Eric Postpischil, for pointing out the situations when the second conversion is defined.
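The character-type case deserves a concrete illustration: examining any object byte by byte through an unsigned char * is always permitted, so the following sketch (mine, not part of the original answer) has no undefined behavior:

#include <stdio.h>

int main(void)
{
    float test = 123456.0f;
    const unsigned char *p = (const unsigned char *)&test;

    /* Access through a character type is one of the cases
       6.5 7 explicitly allows. */
    for (size_t i = 0; i < sizeof test; i++)
        printf("%02x ", p[i]);
    printf("\n");

    return 0;
}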
Let's look at two simple examples, with int and float on modern hardware (no funny business).
float x = 1.0f;
printf("(int) x = %d\n", (int) x);
printf("*(int *) &x = %d\n", *(int *) &x);
Output, maybe... (your results may differ)

(int) x = 1
*(int *) &x = 1065353216
What happens with (int) x is that you convert the value, 1.0f, to an integer.
What happens with *(int *) &x is that you pretend that the value was already an integer. It was NOT an integer.
The floating point representation of 1.0 happens to be the following (in binary):
00111111 10000000 00000000 00000000
which is the same representation as the integer 1065353216.
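You can verify that for yourself; a minimal sketch, again assuming IEEE 754 single precision and using memcpy so the inspection itself is well-defined:

#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void)
{
    float x = 1.0f;
    uint32_t bits;

    memcpy(&bits, &x, sizeof bits);

    /* 1.0f is sign 0, biased exponent 127, mantissa 0:
       0x3f800000, i.e. 1065353216 in decimal. */
    printf("0x%08x = %u\n", (unsigned)bits, (unsigned)bits);
    return 0;
}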