Is there a standard (or MSVC proprietary) typedef for a signed type that can contain the full range of size_t
values? I.e. on a 64-bit system, it would be a 128-bit signed integer.
ssize_t is not a signed size_t. It is only guaranteed to be able to represent values down to -1, and while it might work on POSIX systems, it is not portable to Windows (MSVC provides the signed SSIZE_T instead).
typedef /* implementation-defined */ size_t;

size_t is the unsigned integer type of the result of sizeof, _Alignof (since C11), and offsetof, depending on the data model. The bit width of size_t is not less than 16. (since C99)
No. size_t can and does differ from unsigned int.
size_t is unsigned because negative sizes make no sense.
It's not possible in general to define such a type. It's perfectly legal for an implementation to make size_t
the largest supported unsigned type, which would (almost certainly) mean that no signed type can hold all its values.
ptrdiff_t is not necessarily wide enough. It's the result of subtracting two pointers, but nothing says a pointer subtraction cannot overflow. See section 5.7 of the C++ standard:

When two pointers to elements of the same array object are subtracted, the result is the difference of the subscripts of the two array elements. The type of the result is an implementation-defined signed integral type; this type shall be the same type that is defined as std::ptrdiff_t in the <cstddef> header (18.2). As with any other arithmetic overflow, if the result does not fit in the space provided, the behavior is undefined.
The largest signed type is intmax_t, defined in <stdint.h> or <cstdint>. That's a C99 feature, and C++11 was the first C++ standard to incorporate the C99 standard library, so your compiler might not support it (and MSVC most likely doesn't). (9 years later: that's no longer much of an issue.) If there's any signed type wide enough to hold all possible values of type size_t, then intmax_t is wide enough (though there might be a narrower signed type that also qualifies).
You can also use long long, a signed type guaranteed to be at least 64 bits wide (and most likely the same as intmax_t). Even if it's not wide enough to hold all possible values of type size_t, it will almost certainly hold all relevant values -- unless your implementation actually supports objects bigger than 8 exabytes (that's 8192 petabytes, or 8388608 terabytes).
(Note, I'm using the binary definitions of "exa-", "peta-", and "tera-", which are of questionable validity.)
If you want a standard type that can contain the maximum integer value of the system, the <cstdint> header (since C++11) can help. It defines the maximum-width integer types: intmax_t for signed integers and uintmax_t for unsigned ones -- the widest integer types fully supported by the implementation.
So, supposing you're on a 64-bit architecture, the following statement (with <cstdint>, <iostream>, and <type_traits> included):

    std::cout << "intmax_t is same int64_t? "
              << (std::is_same<intmax_t, int64_t>::value ? "Yes" : "No");

will output:

    intmax_t is same int64_t? Yes
Hope it helps.
I assume you need this type for some kind of pointer arithmetic. It is very unlikely that you need anything other than std::ptrdiff_t. The only case where this will matter on a modern machine is when you are in 32-bit mode and working on a data set of more than 2^31 bytes. (This won't even be possible on Windows without special work.) You won't be able to use two arrays of that size at the same time. In this case you should probably work in 64-bit mode anyway.
In 64-bit mode it will most likely not be a problem for the next 40 years or so, given the current pace of memory development. And when it becomes a problem, compile your code in 128-bit mode and it will continue to run. ;)
If you want a signed type that can hold every value of std::size_t as a positive value, I don't know of a way. Assuming the same number of bits, it takes one bit of information to store the sign, so the new maximum is half the old one. On the other hand, the upper half of values, which would use that bit, are simply wrapped into the negatives, so you can always cast back.
Really, what you probably need is to separate the high unsigned / negative signed values from the others wherever you would cast them. If the unsigned ranges 0 <= x < M/2 <= y <= M map to 0 <= (x, y & (M/2)) < M/2, then every value is accounted for, and neither x nor y will wrap in either direction. It's the same if the signed ranges -M/2 <= y < 0 <= x < M/2 map to 0 <= (x, y+M/2) < M.
This way you know, when x < 0 or y > M/2, that the value is out of range to convert back, but in the meantime you can do comparisons like unsigned y(M) < y(M)+1 or signed x(0) > x(0)-1 that would normally fail after wrapping, like 0 < -1 = M, or M > M+1 = 0, etc.
For the record, I believe the signed type corresponding to std::size_t is best computed as std::make_signed_t<std::size_t>. Currently that is most likely long long, from unsigned long long, but I don't know how universal that is or whether it will ever change. I recommend using std::numeric_limits<T> on the result to check the min/max when you get it anyway.