Programming languages (e.g. C, C++, and Java) usually have several types for integer arithmetic:

- `signed` and `unsigned` types
- `short`, `int`, `long`, `long long`
- `int32_t` vs. `int` (and I know that `int32_t` is not part of the language)

How would you summarize when one should use each of them?
Base your choice of integer type on the range of numbers you want to keep in the variable, and choose the type that best fits that range. If you only need non-negative integers, use an unsigned type. It's that simple.
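As a rough illustration of picking the type from the range (the variable names and values below are made up purely for the example):

```cpp
#include <cstdint>

int main() {
    unsigned int page_views = 0;            // a count is never negative
    int temperature_celsius = -40;          // may be negative, small magnitude
    long long file_offset = 5000000000LL;   // can exceed the 32-bit range
    std::uint8_t red_channel = 255;         // exactly 0..255 by definition

    // Suppress "unused variable" warnings in this stand-alone sketch.
    (void)page_views; (void)temperature_celsius;
    (void)file_offset; (void)red_channel;
}
```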
`int` is usually the integer type that offers the fastest processing speed on the target machine. In contexts that zero-initialize variables (static variables in C and C++, fields in Java), the default value for integers is 0 and for floating-point types 0.0. A 32-bit `float` is reliably accurate to about 7 decimal digits, a 64-bit `double` to about 15 decimal digits.
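A quick way to see that precision difference, using C++ `float` and `double` (which are the usual 32-bit and 64-bit IEEE types on mainstream platforms):

```cpp
#include <iomanip>
#include <iostream>

int main() {
    // 1/3 is not exactly representable; printing 20 digits shows where each
    // type stops being accurate (~7 digits for float, ~15-16 for double).
    float  f = 1.0f / 3.0f;
    double d = 1.0 / 3.0;
    std::cout << std::setprecision(20) << f << '\n' << d << '\n';
}
```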
So the reason you are seeing an `int` of 4 bytes (32 bits) is that the code is compiled to be executed efficiently by a 32-bit CPU. If the same code were compiled for a 16-bit CPU, the `int` may be 16 bits, and on a 64-bit CPU it may be 64 bits.
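You can check what your own compiler uses with a one-liner like this (note that the result depends on the compiler and ABI, not just the CPU; most 64-bit platforms today still use a 32-bit `int`):

```cpp
#include <iostream>

int main() {
    std::cout << "sizeof(int) = " << sizeof(int) << " bytes\n";
}
```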
In Microsoft's compiler, the `__int8` data type is synonymous with type `char`, `__int16` is synonymous with type `short`, and `__int32` is synonymous with type `int`. The `__int64` type is synonymous with type `long long`.
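A minimal check of those size relationships; this sketch assumes MSVC, since the `__intN` types are a Microsoft extension and will not compile elsewhere:

```cpp
// MSVC-only: __int8/__int16/__int32/__int64 are Microsoft-specific keywords.
static_assert(sizeof(__int8)  == sizeof(char),      "__int8 matches char");
static_assert(sizeof(__int16) == sizeof(short),     "__int16 matches short");
static_assert(sizeof(__int32) == sizeof(int),       "__int32 matches int");
static_assert(sizeof(__int64) == sizeof(long long), "__int64 matches long long");

int main() {}
```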
The default integral type (`int`) gets a "first among equals" preferential treatment in pretty much all languages. So we can use that as a default, if no reasons to prefer another type exist.
Such reasons might be:
- Doing bit manipulation (using the shift operators `<<` and `>>`).
- Needing a guaranteed size (`int32_t`) -- if your program is meant to be portable and expected to be compiled with different compilers, this becomes more important.

Update (expanding on guaranteed size types)
My personal opinion is that types with no guaranteed fixed size are more trouble than they are worth today. I won't go into the historical reasons that gave birth to them (briefly: source-code portability), but the reality is that in 2011 very few people, if any, stand to benefit from them.
On the other hand, there are lots of things that can go wrong when using such types. For these reasons (and there are probably others too), using such types is in theory a major pain. Additionally, unless extreme portability is a requirement, you don't stand to gain anything in return. And indeed, the whole purpose of typedefs like `int32_t` is to eliminate the usage of loosely sized types entirely.
As a practical matter, if you know that your program is not going to be ported to another compiler or architecture, you can ignore the fact that the types have no fixed size and treat them as if they are the known size your compiler uses for them.
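As a minimal sketch of why the guaranteed-size typedefs matter for portable data: the on-disk header below is purely hypothetical, but every field in it must have exactly the same width on every platform, which plain `int` or `long` cannot promise:

```cpp
#include <cstdint>
#include <cstring>

// Hypothetical file header: field widths must be identical everywhere.
struct FileHeader {
    std::uint32_t magic;         // always 4 bytes
    std::uint32_t version;       // always 4 bytes
    std::int64_t  payload_size;  // always 8 bytes
};

int main() {
    FileHeader h{0x46494C45u, 1u, 1024};
    unsigned char buffer[sizeof h];
    // Byte order and padding still need care, but the field widths are fixed.
    std::memcpy(buffer, &h, sizeof h);
}
```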
One by one to your questions:
`signed` and `unsigned`: depends on what you need. If you're sure that the number will never be negative, use `unsigned`. This will give you the opportunity to use bigger numbers. For example, a signed `char` (1 byte) has the range [-128, 127], but if it's `unsigned` the maximum value is roughly doubled (you have one more bit to use, since there is no sign bit), so an `unsigned char` can hold up to 255 (all bits set to 1).
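You can confirm those ranges on your own implementation with `std::numeric_limits`:

```cpp
#include <iostream>
#include <limits>

int main() {
    // Cast to int so the limits print as numbers rather than characters.
    std::cout << "signed char:   "
              << int(std::numeric_limits<signed char>::min()) << " to "
              << int(std::numeric_limits<signed char>::max()) << '\n'
              << "unsigned char: "
              << int(std::numeric_limits<unsigned char>::min()) << " to "
              << int(std::numeric_limits<unsigned char>::max()) << '\n';
}
```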
`short`, `int`, `long`, `long long` - these are pretty clear, aren't they? The smallest integer type (except `char`) is `short`, the next one is `int`, etc. But they are platform dependent: `int` could be 2 bytes (long, long ago :D) or 4 bytes (usually). `long` could be 4 bytes (on a 32-bit platform) or 8 bytes (on a 64-bit platform), etc. `long long` is not a standard type in C++ (it will be in C++0x), but usually `int64_t` is defined as a typedef for it.
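To see what your own platform uses (the comments give typical values for a 64-bit Linux system; none of them are guaranteed by the standard):

```cpp
#include <iostream>

int main() {
    std::cout << "short:     " << sizeof(short)     << '\n'   // typically 2
              << "int:       " << sizeof(int)       << '\n'   // typically 4
              << "long:      " << sizeof(long)      << '\n'   // 4 or 8
              << "long long: " << sizeof(long long) << '\n';  // typically 8
}
```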
`int32_t` vs `int` - `int32_t` and other types like it guarantee their size. For example, `int32_t` is guaranteed to be 32 bits, while, as I already said, the size of `int` is platform dependent.
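A compile-time check makes the difference concrete (this assumes your standard library provides the optional `<cstdint>` exact-width types, which virtually all mainstream implementations do):

```cpp
#include <climits>
#include <cstdint>

// int32_t is exactly 32 bits wherever it exists, so this always passes.
static_assert(sizeof(std::int32_t) * CHAR_BIT == 32, "int32_t is 32 bits");

// There is no such guarantee for int: the standard only requires at least
// 16 bits, so the equivalent assertion on int could fail on some platforms.
// static_assert(sizeof(int) * CHAR_BIT == 32, "not guaranteed!");

int main() {}
```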