I'm trying to port my code to 64-bit. I found that C++ provides 64-bit integer types, but I'm still confused about them.

First, I found four different 64-bit ints: int_least64_t, int_fast64_t, int64_t, and intmax_t, along with their unsigned counterparts. I tested them using sizeof() and they are all 8 bytes, so they are 64-bit.

What's the difference between them? What is the meaning of the least and fast types? What about intmax_t?
On your platform, they're all names for the same underlying data type. On other platforms, they aren't.

int64_t is required to be EXACTLY 64 bits. On architectures with (for example) a 9-bit byte, it won't be available at all.

int_least64_t is the smallest data type with at least 64 bits. If int64_t is available, it will be used. But (for example) on a machine with a 9-bit byte, this could be 72 bits.

int_fast64_t is the data type with at least 64 bits and the best arithmetic performance. It's there mainly for consistency with int_fast8_t and int_fast16_t, which on many machines will be 32 bits, not 8 or 16. In a few more years, there might be an architecture where 128-bit math is faster than 64-bit, but I don't think any exists today.

If you're porting an algorithm, you probably want to be using int_fast32_t, since it will hold any value your old 32-bit code can handle, but will be 64-bit if that's faster. If you're converting pointers to integers (why?) then use intptr_t.
int64_t has exactly 64 bits. It might not be defined for all platforms.

int_least64_t is the smallest type with at least 64 bits.

int_fast64_t is the type that's fastest to process, with at least 64 bits.

On a 32- or 64-bit processor, they will all be defined, and will all have 64 bits. On a hypothetical 73-bit processor, int64_t won't be defined (since there is no type with exactly 64 bits), and the others will have 73 bits.