In a lot of code examples, source code, libraries, etc. I see the use of int when, as far as I can see, an unsigned int would make much more sense. One place I see this a lot is in for loops. See the example below:
for (int i = 0; i < length; i++) {
    // Do stuff
}
Why on earth would you use an int rather than an unsigned int? Is it just laziness - people can't be bothered with typing unsigned?
An unsigned integer means the variable can hold only a non-negative value. The %u format specifier is used within the printf() function for printing unsigned integer variables.
The int type in C is a signed integer, which means it can represent both negative and positive numbers. This is in contrast to an unsigned integer (declared as unsigned int), which can only represent non-negative numbers.
The Google C++ style guide recommends avoiding unsigned integers except in situations that definitely require them (for example: file formats often store sizes in uint32_t or uint64_t -- no point in wasting a signedness bit that will never be used).
An int is signed by default, meaning it can represent both positive and negative values. An unsigned int is an integer that can never be negative.
Using unsigned can introduce programming errors that are hard to spot, and it's usually better to use signed int just to avoid them. One example would be when you decide to iterate backwards rather than forwards and write this:

for (unsigned i = 5; i >= 0; i--) {
    printf("%u\n", i);
}

This loop never terminates: when i reaches 0 and is decremented, it wraps around to UINT_MAX, so the condition i >= 0 is always true.
Another would be if you do some math inside the loop:

for (unsigned i = 0; i < 10; i++) {
    for (unsigned j = 0; j < 10; j++) {
        if (i - j >= 4)
            printf("%u %u\n", i, j);
    }
}

When j > i, the subtraction i - j wraps around to a huge unsigned value instead of going negative, so the condition is true for pairs it was never meant to match.
Using unsigned introduces the potential for these sorts of bugs, and there's not really any upside.
It's generally laziness or lack of understanding.
I always use unsigned int when the value should not be negative. That also serves the documentation purpose of specifying what the correct values should be.
IMHO, the assertion that it is safer to use "int" than "unsigned int" is simply wrong and a bad programming practice.
If you have used Ada or Pascal you'd be accustomed to using the even safer practice of specifying specific ranges for values (e.g., an integer that can only be 1, 2, 3, 4, 5).