The question is quite simple.
On 32-bit systems:
std::cout << sizeof(unsigned int); //4
std::cout << sizeof(unsigned long long); //8
std::cout << sizeof(std::size_t); //4
On 64-bit systems:
std::cout << sizeof(unsigned int); //4
std::cout << sizeof(unsigned long long); //8
std::cout << sizeof(std::size_t); //8
I have only checked MSVC's implementation, and it looks like this:
#ifdef _WIN64
typedef unsigned __int64 size_t;
#else
typedef unsigned int size_t;
#endif
So why not make std::size_t an unsigned long long (std::uintmax_t) on both 32-bit and 64-bit systems, when they clearly support it? Or am I wrong about that?
The size_t type's size is chosen so that it can store the maximum size of a theoretically possible array of any type. On a 32-bit system size_t will take 32 bits, on a 64-bit one, 64 bits. In other words, on such platforms a variable of type size_t can safely hold a pointer value.
The size_t type is the type returned by the sizeof operator, which in our case happens to be unsigned int. It is an unsigned integer that can express the size of any memory range supported on our machine. It could just as well be unsigned long or unsigned long long.
std::size_t is the unsigned integer type of the result of the sizeof operator as well as the sizeof... operator and the alignof operator (since C++11). The bit width of std::size_t is not less than 16.
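As a quick check (a minimal sketch, not tied to any particular compiler), the compiler itself can confirm that sizeof yields std::size_t:
#include <cstddef>
#include <type_traits>
// On any conforming implementation the result type of sizeof is std::size_t.
static_assert(std::is_same_v<decltype(sizeof(int)), std::size_t>,
              "sizeof yields std::size_t");
int main() {}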
The point of size_t is to be able to hold the size of the biggest possible object. On a 32-bit system no object can occupy more than 2^32 bytes, so a 32-bit type is sufficient.
Using a 64-bit type would waste space and could be more expensive at run time.
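As a rough illustration (a sketch; the exact figures assume a typical 32-bit target where std::size_t is 4 bytes and unsigned long long is 8), storing a million indices in the wider type doubles their footprint:
#include <cstddef>
#include <iostream>
#include <vector>
int main() {
    std::vector<std::size_t> narrow(1000000);       // ~4 MB on a typical 32-bit target
    std::vector<unsigned long long> wide(1000000);  // ~8 MB regardless of target
    std::cout << narrow.size() * sizeof(narrow[0]) << " vs "
              << wide.size() * sizeof(wide[0]) << " bytes\n";
}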
That would be a pointless waste. On a 32-bit machine you have a 4 GB address space, so you cannot have objects bigger than 4 GB, and the range of a 32-bit size_t is perfectly adequate.
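One way to see this on your own machine (a sketch; the printed values depend on the target):
#include <cstddef>
#include <iostream>
#include <limits>
int main() {
    // On a typical 32-bit target this prints 4294967295, i.e. the whole 4 GB
    // address space, so size_t can count the bytes of any possible object.
    std::cout << std::numeric_limits<std::size_t>::max() << '\n';
    // size_t is usually (though not required to be) as wide as a data pointer.
    std::cout << sizeof(std::size_t) << ' ' << sizeof(void*) << '\n';
}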