
What's the difference between "(type)variable" and "*((type *)&variable)", if any?

I would like to know if there is a difference between:

  1. Casting a primitive variable to another primitive type
  2. Dereferencing a cast of a primitive variable's address to a pointer of another primitive type

I would also like to know if there is a good reason to ever use (2) over (1). I have seen (2) in legacy code which is why I was wondering. From the context, I couldn't understand why (2) was being favored over (1). And from the following test I wrote, I have concluded that at least the behavior of an upcast is the same in either case:

/* compile with gcc -lm */
#include <stdio.h>
#include <math.h>

int main(void)
{
    unsigned max_unsigned = pow(2, 8 * sizeof(unsigned)) - 1;

    printf("VALUES:\n");
    printf("%u\n", max_unsigned + 1);
    printf("%lu\n", (unsigned long)max_unsigned + 1);          /* case 1 */
    printf("%lu\n", *((unsigned long *)&max_unsigned) + 1);    /* case 2 */

    printf("SIZES:\n");
    printf("%d\n", sizeof(max_unsigned));
    printf("%d\n", sizeof((unsigned long)max_unsigned));       /* case 1 */
    printf("%d\n", sizeof(*((unsigned long *)&max_unsigned))); /* case 2 */

    return 0;
}

Output:

VALUES:
0
4294967296
4294967296
SIZES:
4
8
8

From my perspective, there should be no differences between (1) and (2), but I wanted to consult the SO experts for a sanity check.

asked Nov 18 '13 by eeowaa

2 Answers

The first cast is legal; the second cast may not be legal.

The first cast tells the compiler to use the knowledge of the type of the variable to make a conversion to the desired type; the compiler does it, provided that a proper conversion is defined in the language standard.

The second cast tells the compiler to forget its knowledge of the variable's type, and re-interpret its internal representation as that of a different type *. This has limited applicability: it works only as long as the binary representation matches that of the type pointed to by the target pointer. However, this is not equivalent to the first cast, because in this situation no value conversion takes place.

Switching the type of the variable being cast to something with a different representation, say, a float, illustrates this point well: the first conversion produces a correct result, while the second conversion produces garbage:

float test = 123456.0f;
printf("VALUES:\n");
printf("%f\n", test + 1);
printf("%lu\n", (unsigned long)test + 1);
printf("%lu\n", *((unsigned long *)&test) + 1); // Undefined behavior

This prints

123457.000000
123457
1206984705

(demo)
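
If the goal is to examine a float's representation rather than convert its value, a way that avoids the aliasing problem is to copy its bytes into an integer of the same size with memcpy instead of dereferencing a casted pointer. A minimal sketch, assuming an IEEE-754 single-precision float and a 32-bit uint32_t:

#include <stdio.h>
#include <inttypes.h>
#include <string.h>

int main(void)
{
    float test = 123456.0f;
    uint32_t bits;

    /* Copies the object representation; no aliasing violation, no misaligned read */
    memcpy(&bits, &test, sizeof bits);

    /* On an IEEE-754 platform this prints 0x47f12000 = 1206984704,
       which is where the "garbage" value in the output above comes from */
    printf("0x%08" PRIx32 " = %" PRIu32 "\n", bits, bits);
    return 0;
}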


* This is valid only when one of the types is a character type, when the conversion is trivial (i.e. there is no conversion at all), when only qualifiers or signedness change, or when you cast to/from a struct/union whose first member is a valid conversion source/target; the pointer must also be suitably aligned for the target type. Otherwise, this leads to undefined behavior. See C 2011 (N1570) 6.5 ¶7 for the complete description. Thanks, Eric Postpischil, for pointing out the situations in which the second conversion is defined.
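
As an illustration of the character-type exception, here is a sketch that inspects an object's bytes through an unsigned char *, which the rule always permits (the byte values in the comment assume a little-endian machine; they are not guaranteed by the standard):

#include <stdio.h>

int main(void)
{
    unsigned x = 0x01020304u;
    const unsigned char *p = (const unsigned char *)&x; /* character-type access is always allowed */
    size_t i;

    /* Prints the object representation byte by byte,
       e.g. "04 03 02 01" on a little-endian machine */
    for (i = 0; i < sizeof x; i++)
        printf("%02x ", p[i]);
    printf("\n");
    return 0;
}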
answered by Sergey Kalinichenko

Let's look at two simple examples, with int and float on modern hardware (no funny business).

float x = 1.0f;
printf("(int) x = %d\n", (int) x);
printf("*(int *) &x = %d\n", *(int *) &x);

Output, maybe... (your results may differ)

(int) x = 1
*(int *) &x = 1065353216

What happens with (int) x is you convert the value, 1.0f, to an integer.

What happens with *(int *) &x is you pretend that the value was already an integer. It was NOT an integer.

The floating point representation of 1.0 happens to be the following (in binary):

00111111 10000000 00000000 00000000

Which is the same representation as the integer 1065353216.
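
One defined way to check this is a union pun: in C (unlike C++), reading a union member other than the one last written is allowed and simply reinterprets the bytes. A minimal sketch, assuming a 32-bit IEEE-754 float and 32-bit unsigned int:

#include <stdio.h>

int main(void)
{
    union { float f; unsigned u; } pun;

    pun.f = 1.0f;
    /* Prints 0x3f800000 = 1065353216 on an IEEE-754 platform */
    printf("0x%08x = %u\n", pun.u, pun.u);
    return 0;
}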

answered by Dietrich Epp