Linux's stddef.h defines offsetof() as:
#define offsetof(TYPE, MEMBER) ((size_t) &((TYPE *)0)->MEMBER)
whereas the Wikipedia article on offsetof() (http://en.wikipedia.org/wiki/Offsetof) defines it as:
#define offsetof(st, m) \
((size_t) ( (char *)&((st *)(0))->m - (char *)0 ))
Why subtract (char *)0 in the Wikipedia version? Is there any case where that would actually make a difference?
The first version converts a pointer directly into an integer with a cast, which is not portable.
The second version is more portable across a wider variety of compilers, because it subtracts two char pointers, so the integer result comes from the compiler's pointer arithmetic rather than from a pointer-to-integer conversion.
BTW, I was the editor that added the original code to the Wiki entry, which was the Linux form. Later editors changed it to the more portable version.
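To make the comparison concrete, here is a minimal sketch that drops both quoted forms next to the standard offsetof() on a throwaway struct. The macro names LINUX_OFFSETOF and WIKI_OFFSETOF are my own labels, not anything defined by either source; on common platforms all three expressions print the same value, and the difference only matters on implementations where the pointer-to-integer conversion does not behave as the cast form assumes.
#include <stdio.h>
#include <stddef.h>

/* Hypothetical names for the two macro forms quoted above. */
#define LINUX_OFFSETOF(TYPE, MEMBER) ((size_t) &((TYPE *)0)->MEMBER)
#define WIKI_OFFSETOF(st, m) \
    ((size_t) ( (char *)&((st *)(0))->m - (char *)0 ))

struct sample {          /* a throwaway struct for the comparison */
    char   c;
    double d;
    int    i;
};

int main(void)
{
    /* On common platforms all three expressions agree. */
    printf("standard offsetof: %zu\n", offsetof(struct sample, i));
    printf("cast form:         %zu\n", LINUX_OFFSETOF(struct sample, i));
    printf("subtraction form:  %zu\n", WIKI_OFFSETOF(struct sample, i));
    return 0;
}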
The standard does not require a null pointer to be represented by the bit pattern 0; the representation can be platform specific.
Doing the subtraction cancels out whatever that representation is, so the result is the member's offset even on a platform where casting the pointer directly to an integer would not yield it.
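As an aside, the same pointer-subtraction idea can be applied to a real object instead of a null pointer, which sidesteps the representation question entirely. A quick sketch, assuming a hypothetical macro name of my own choosing:
#include <stdio.h>
#include <stddef.h>

/* Hypothetical macro: compute the offset from the address of an actual
   object, so no null pointer is ever formed or converted. */
#define OFFSET_IN_OBJECT(obj, member) \
    ((size_t)((char *)&(obj).member - (char *)&(obj)))

struct sample { char c; double d; };

int main(void)
{
    struct sample s;
    printf("%zu == %zu\n",
           OFFSET_IN_OBJECT(s, d),
           offsetof(struct sample, d));
    return 0;
}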