Until recently, I'd considered the decision by most systems implementors/vendors to keep plain int 32-bit even on 64-bit machines a sort of expedient wart. With modern C99 fixed-size types (int32_t and uint32_t, etc.) the need for there to be a standard integer type of each size 8, 16, 32, and 64 mostly disappears, and it seems like int could just as well be made 64-bit.
However, the biggest real consequence of the size of plain int in C comes from the fact that C essentially does not have arithmetic on smaller-than-int types. In particular, if int is larger than 32 bits, the result of any arithmetic on uint32_t values has type signed int, which is rather unsettling.
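For instance, here is a hypothetical sketch (the function and its name are purely illustrative) of code whose meaning would silently change:

```c
#include <stdint.h>

/* With a 32-bit int, a - b has type uint32_t, wraps modulo 2^32, and the
 * comparison below is unsigned.  If int were 64 bits, a and b would be
 * promoted to signed int, a - b could be negative, and the comparison
 * would be signed: e.g. a = 0, b = 1 gives 0xFFFFFFFF (false) today,
 * but -1 (true) under a 64-bit int. */
int within_window(uint32_t a, uint32_t b)
{
    return (a - b) < 0x10000000;
}
```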
Is this a good reason to keep int permanently fixed at 32-bit on real-world implementations? I'm leaning towards saying yes. It seems to me like there could be a huge class of uses of uint32_t which break when int is larger than 32 bits. Even applying the unary minus or bitwise complement operator becomes dangerous unless you cast back to uint32_t.
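A sketch of the unary-minus case (widen_negate is a made-up name, used only for illustration):

```c
#include <stdint.h>

uint64_t widen_negate(uint32_t x)
{
    /* Intended: the 32-bit two's-complement negation of x, zero-extended.
     * With a 32-bit int, -x has type uint32_t, so x == 1 yields
     * 0x00000000FFFFFFFF.  With a 64-bit int, x would promote to signed
     * int, -x would be mathematically negative, and x == 1 would yield
     * 0xFFFFFFFFFFFFFFFF.  The same applies to ~x.  Writing (uint32_t)-x
     * restores the intended value in both cases. */
    return -x;
}
```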
Of course the same issues apply to uint16_t and uint8_t on current implementations, but everyone seems to be aware of and used to treating them as "smaller-than-int" types.
A 64-bit signed integer has a minimum value of -9,223,372,036,854,775,808 and a maximum value of 9,223,372,036,854,775,807 (inclusive); a 64-bit unsigned integer ranges from 0 to 18,446,744,073,709,551,615.
A 64-bit register can hold any of 2^64 (over 18 quintillion, or about 1.8×10^19) different values. The range of integer values that can be stored in 64 bits depends on the integer representation used.
They are guaranteed to be a minimum of 64 bits. It's theoretically possible that they could be larger (e.g., 128 bits), though I'm reasonably sure they're only 64 bits on anything currently available.
An 8-bit unsigned integer has a range of 0 to 255, while an 8-bit signed integer has a range of -128 to 127; both represent 256 distinct values. It is important to note that a computer memory location merely stores a binary pattern.
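For reference, a small C sketch that prints those 64-bit limits using the standard <stdint.h>/<inttypes.h> macros:

```c
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* Exact-width 64-bit ranges as exposed by the standard limit macros. */
    printf("int64_t  : %" PRId64 " .. %" PRId64 "\n", INT64_MIN, INT64_MAX);
    printf("uint64_t : 0 .. %" PRIu64 "\n", UINT64_MAX);
    return 0;
}
```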
As you say, I think that the promotion rules really are the killer. uint32_t would then promote to int, and all of a sudden you'd have signed arithmetic where almost everybody expects unsigned.
This would be mostly hidden in places where you just do arithmetic and assign back to a uint32_t. But it could be deadly in places where you compare against constants. Whether code that relies on such comparisons without an explicit cast is reasonable, I don't know. Casting constants like (uint32_t)1 can become quite tedious. I personally always use the suffix U for constants that I want to be unsigned, but that is already not as readable as I would like.
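A sketch of the kind of comparison this has in mind (past_limit is a made-up name, just for illustration):

```c
#include <stdint.h>

int past_limit(uint32_t count, uint32_t limit)
{
    /* With a 32-bit int, count - 1 is unsigned, so count == 0 wraps to
     * 0xFFFFFFFF and the guard fires.  With a 64-bit int, count - 1 would
     * be a signed -1 and the guard would silently stop firing.  Casting,
     * (uint32_t)(count - 1) > limit, or writing the constant as 1U, keeps
     * the comparison unsigned. */
    return count - 1 > limit;
}
```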
Also keep in mind that uint32_t etc. are not guaranteed to exist. Not even uint8_t. Requiring them is an extension from POSIX. So in that sense, C as a language is far from being able to make that move.
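A minimal sketch of coding around that (the u32 typedef is just illustrative): the standard only defines UINT32_MAX when uint32_t itself exists, while the "least" types are always present.

```c
#include <stdint.h>

#ifdef UINT32_MAX
typedef uint32_t u32;          /* exactly 32 bits, no padding */
#else
typedef uint_least32_t u32;    /* at least 32 bits; always available */
#endif
```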
"Reasonable Code"...
Well... the thing about development is, you write and fix it and then it works... and then you stop!
And maybe you've been burned a lot so you stay well within the safe ranges of certain features, and maybe you haven't been burned in that particular way so you don't realize that you're relying on something that could kind-of change.
Or even that you're relying on a bug.
On olden Mac 68000 compilers, int was 16 bits and long was 32. But even then, most extant C code assumed an int was 32, so typical code you found on a newsgroup wouldn't work. (Oh, and Mac didn't have printf, but I digress.)
So, what I'm getting at is, yes, if you change anything, then some things will break.