When could sizeof(size_t) and sizeof(ptrdiff_t) differ? Is there any real-world example of this?
Note: I know that all the standard says about these types is that they are implementation-defined. But in all the implementations I know of, sizeof(size_t) and sizeof(ptrdiff_t) are equal. Maybe for some weird (or not so weird) reason they could differ.

It seems that sizeof(ptrdiff_t) < sizeof(size_t) is not a very useful case, as pointer arithmetic would be very limited for large arrays.

The other case, sizeof(ptrdiff_t) > sizeof(size_t), could be slightly useful, as pointer subtraction would then be defined for all arrays (if an array larger than PTRDIFF_MAX existed, subtraction would be defined even for its most distant elements, contrary to the usual case when sizeof(ptrdiff_t) == sizeof(size_t)). Is there any real implementation which does this? Does this approach have any other useful properties?
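To illustrate what I mean, here is a minimal sketch, assuming a typical implementation where the two types have the same width, that just prints the relevant limits; the comments spell out why a SIZE_MAX-sized array would be a problem for pointer subtraction:

#include <stdio.h>
#include <stdint.h>
#include <stddef.h>

int main(void)
{
    /* On typical implementations both types have the same width,
       so not every distance within an object of SIZE_MAX bytes
       is representable in ptrdiff_t. */
    printf("sizeof(size_t)    = %zu\n", sizeof(size_t));
    printf("sizeof(ptrdiff_t) = %zu\n", sizeof(ptrdiff_t));
    printf("SIZE_MAX          = %ju\n", (uintmax_t)SIZE_MAX);
    printf("PTRDIFF_MAX       = %jd\n", (intmax_t)PTRDIFF_MAX);

    /* If an implementation allowed a char array of SIZE_MAX elements,
       then &a[SIZE_MAX] - &a[0] would exceed PTRDIFF_MAX, and the
       subtraction would be undefined behavior (C11 6.5.6p9). */
    return 0;
}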
A reasonable case would be the classic 8086. With array sizes limited to 64 KiB segments, size_t can be 16 bits. However, to support a ptrdiff_t with a 128 KiB range (-64 KiB to +64 KiB), you'd need 17 bits. The problem here is that the pointer type isn't a trivial byte counter which wraps around after 64 KiB.
Here's an excerpt from the stddef.h of Borland C++ 3.1:
#if defined(__LARGE__) || defined(__HUGE__) || defined(__COMPACT__)
typedef long ptrdiff_t;
#else
typedef int ptrdiff_t;
#endif
typedef unsigned size_t;
So, if the memory model is large, huge, or compact (these memory models mean that data can be larger than 64 KiB), then sizeof(ptrdiff_t) is 4, while sizeof(size_t) is always 2.
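As a quick check (hypothetical, since it assumes you still have Borland C++ 3.1 and build in the large memory model), a small test program like the one below should report sizeof(ptrdiff_t) == 4 and sizeof(size_t) == 2. The casts to unsigned are there because a compiler that old predates the %zu format specifier:

#include <stdio.h>
#include <stddef.h>

int main(void)
{
    /* In the large model, the excerpt above makes ptrdiff_t a
       32-bit long while size_t stays a 16-bit unsigned int. */
    printf("sizeof(size_t)    = %u\n", (unsigned)sizeof(size_t));
    printf("sizeof(ptrdiff_t) = %u\n", (unsigned)sizeof(ptrdiff_t));
    return 0;
}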