Is using unsigned integer overflow good practice?

People also ask

Is it good practice to use unsigned int?

The Google C++ style guide recommends avoiding unsigned integers except in situations that definitely require it (for example: file formats often store sizes in uint32_t or uint64_t -- no point in wasting a signedness bit that will never be used).
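For illustration, here is a minimal sketch of the file-format case the style guide has in mind. The struct and field names are hypothetical, not taken from any real format: sizes and offsets are stored in fixed-width unsigned fields because a negative value is meaningless there and the full bit range may be needed.

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical on-disk record header: a size can never be negative,
 * so an unsigned fixed-width field uses every bit for magnitude. */
struct record_header {
    uint32_t payload_size;   /* bytes of payload following the header */
    uint64_t file_offset;    /* absolute position of the record in the file */
};

int main(void) {
    struct record_header h = { .payload_size = 4096u, .file_offset = 1048576u };
    printf("payload: %" PRIu32 " bytes at offset %" PRIu64 "\n",
           h.payload_size, h.file_offset);
    return 0;
}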

Can overflow happen with unsigned numbers?

"A computation involving unsigned operands can never overflow, because a result that cannot be represented by the resulting unsigned integer type is reduced modulo the number that is one greater than the largest value that can be represented by the resulting type."

What happens when an unsigned integer overflows?

When an unsigned arithmetic operation on N-bit integers produces a result larger than the maximum representable value, the result is reduced modulo 2^N: only the N least significant bits are retained, effectively causing a wrap-around.
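For example, the mathematically exact product below is 0x1FFFFFFFE (33 bits wide), but only the low 32 bits survive in a uint32_t. A small sketch, assuming the usual case where int is at least 32 bits wide:

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint32_t a = 0xFFFFFFFFu;
    uint32_t b = 2u;
    uint32_t prod = a * b;              /* 0x1FFFFFFFE reduced modulo 2^32 */
    printf("0x%" PRIX32 "\n", prod);    /* prints FFFFFFFE */
    return 0;
}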

Why are unsigned integers bad?

The big problem with unsigned int is that if you subtract 1 from an unsigned int 0, the result isn't a negative number and isn't less than the number you started with: it is the largest possible unsigned int value.
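A short demonstration of both the wrap itself and the loop bug it commonly causes (the loop is shown commented out because it would never terminate):

#include <limits.h>
#include <stdio.h>

int main(void) {
    unsigned int zero = 0u;
    /* 0 - 1 wraps to the largest representable value. */
    printf("%u\n", zero - 1u);                  /* prints UINT_MAX */
    printf("%d\n", (zero - 1u) == UINT_MAX);    /* prints 1 */

    /* Classic pitfall: i is unsigned, so "i >= 0" is always true and
     * decrementing past 0 wraps back to UINT_MAX.
     *
     * for (unsigned int i = 9; i >= 0; --i) { ... }
     */
    return 0;
}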


I was reading the C Standard the other day and noticed that, unlike signed integer overflow (which is undefined), unsigned integer overflow is well defined. I've seen it used in a lot of code for maximums, etc., but given all the voodoo about overflow, is this considered good programming practice? Is it in any way insecure? I know that a lot of modern languages like Python do not support it; instead they just keep extending the size of large numbers.
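For example, something like this FNV-1a-style hash is the kind of code I mean (a rough sketch, not taken from any particular codebase): the multiplication is expected to wrap, and because the operands are unsigned that wrap is well defined.

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* FNV-1a style hash: the repeated multiply deliberately wraps
 * modulo 2^32, which is well defined for unsigned operands. */
static uint32_t hash32(const char *s) {
    uint32_t h = 2166136261u;               /* FNV offset basis */
    while (*s) {
        h ^= (uint32_t)(unsigned char)*s++;
        h *= 16777619u;                     /* FNV prime; wraps by design */
    }
    return h;
}

int main(void) {
    printf("%" PRIu32 "\n", hash32("overflow"));
    return 0;
}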