Until recently I believed that 'long' was the same thing as 'int', both for historical reasons and because desktop processors all have at least 32 bits (and I only ran into trouble with that "dupe" because I was developing exclusively on 32-bit machines).
Reading this, I discovered that the C standard actually requires 'int' to be at least 16 bits wide, while 'long' must be at least 32 bits.
In fact, in the list of minimum widths

- 'char': at least 8 bits
- 'short': at least 16 bits
- 'int': at least 16 bits
- 'long': at least 32 bits
- 'long long': at least 64 bits

the permitted ranges always overlap, so in practice at least two of the types end up with the same width, whatever widths the compiler and platform choose.
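To see this concretely, here is a minimal sketch (my illustration, not from the question itself) that prints the widths a given implementation actually chose. On x86-64 Linux it typically prints 8/16/32/64/64, so 'long' duplicates 'long long'; on 64-bit Windows it typically prints 8/16/32/32/64, so 'int' duplicates 'long'.

```c
#include <limits.h>
#include <stdio.h>

int main(void) {
    /* sizeof(T) * CHAR_BIT gives the storage width in bits
       (ignoring the possibility of padding bits). */
    printf("char:      %zu bits\n", sizeof(char)      * CHAR_BIT);
    printf("short:     %zu bits\n", sizeof(short)     * CHAR_BIT);
    printf("int:       %zu bits\n", sizeof(int)       * CHAR_BIT);
    printf("long:      %zu bits\n", sizeof(long)      * CHAR_BIT);
    printf("long long: %zu bits\n", sizeof(long long) * CHAR_BIT);
    return 0;
}
```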
Why did the standard committee introduce an extra type into what could have been as simple as char/short/int/long (or int_k, int_2k, int_4k, int_8k)?
Was that for historical reasons, e.g. gcc x.x implementing 'int' as 32 bits while another compiler implemented it as 16, or is there a real technical reason I'm missing?
The central point is that int/unsigned is not just another step in the char, short, int, long, long long ladder of integer sizes. 'int' is special: it is the type that all narrower types promote to, and so it is typically the size that works "best" on a given processor. Whether 'int' should match 'short', match 'long', or be wedged distinctly between the two is therefore highly platform dependent.
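To make the "promotes to int" point concrete, here is a small sketch (my example, assuming the common case where 'int' is wider than 'char'): arithmetic on two unsigned char operands is performed in int after the integer promotions, so the result has int's type and width, not the operands'.

```c
#include <stdio.h>

int main(void) {
    unsigned char a = 200, b = 100;

    /* Both operands are promoted to int before the addition,
       so a + b is computed as 300 in int rather than wrapping
       to 44 in 8 bits. */
    printf("a + b          = %d\n",  a + b);
    printf("sizeof a       = %zu\n", sizeof a);        /* 1 */
    printf("sizeof (a + b) = %zu\n", sizeof (a + b));  /* sizeof(int) */
    return 0;
}
```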
C is designed to accommodate a wide range of processors. That C is 40+ years old is a testament to a successful strategy.