The maximum value of an n-bit integer is 2^n - 1. Why do we have the "minus 1"? Why isn't the maximum just 2^n?
The XDR standard defines a signed integer as a 32-bit datum that encodes an integer in the range [-2147483648, 2147483647]. An unsigned integer is a 32-bit datum that encodes a nonnegative integer in the range [0, 4294967295].
The number 2,147,483,647 (hexadecimal 0x7FFFFFFF) is the maximum positive value for a 32-bit signed binary integer in computing. It is therefore the maximum value for variables declared as integers (e.g., as int) in many programming languages.
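For reference, C's <stdint.h> defines these same limits as named constants; a minimal sketch that just prints them (any C99 compiler should do):

```c
#include <stdint.h>
#include <inttypes.h>
#include <stdio.h>

int main(void) {
    /* 32-bit signed range: -2,147,483,648 .. 2,147,483,647 */
    printf("int32_t  min: %" PRId32 "\n", INT32_MIN);
    printf("int32_t  max: %" PRId32 "\n", INT32_MAX);   /* 0x7FFFFFFF */

    /* 32-bit unsigned range: 0 .. 4,294,967,295 */
    printf("uint32_t max: %" PRIu32 "\n", UINT32_MAX);  /* 0xFFFFFFFF */
    return 0;
}
```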
Since the range of an unsigned binary number is 0 to 2^n - 1, the range of a 5-bit unsigned binary number is 0 to 2^5 - 1, i.e., from a minimum value of 0 (binary 00000) to a maximum value of 31 (binary 11111).
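The same arithmetic can be checked with a left shift; a small C sketch (variable names here are just for illustration):

```c
#include <stdio.h>

int main(void) {
    unsigned n = 5;
    unsigned max5 = (1u << n) - 1;   /* 2^5 - 1 = 31, i.e. binary 11111 */
    printf("5-bit unsigned range: 0 .. %u\n", max5);
    return 0;
}
```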
The -1 is because integers start at 0, but our counting starts at 1.

So, 2^32 - 1 is the maximum value for a 32-bit unsigned integer (32 binary digits), and 2^32 is the number of possible values.

To see why, look at decimal. 10^2 - 1 is the maximum value of a 2-digit decimal number (99). Because our intuitive human counting starts at 1, but integers are 0-based, 10^2 is the number of values (100).
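A short C sketch of that counting argument (the names are illustrative); note that 2^32 itself does not fit in 32 bits, so 64-bit arithmetic is used for it:

```c
#include <stdio.h>
#include <stdint.h>

int main(void) {
    /* Decimal: 2 digits */
    unsigned max_2_digit   = 100 - 1;   /* 99  = 10^2 - 1 */
    unsigned count_2_digit = 100;       /* 100 values: 00 .. 99 */

    /* Binary: 32 bits */
    uint64_t count_32_bit = (uint64_t)1 << 32;   /* 2^32 = 4,294,967,296 values */
    uint64_t max_32_bit   = count_32_bit - 1;    /* 2^32 - 1 = 4,294,967,295 */

    printf("2-digit decimal: max %u, count %u\n", max_2_digit, count_2_digit);
    printf("32-bit unsigned: max %llu, count %llu\n",
           (unsigned long long)max_32_bit, (unsigned long long)count_32_bit);
    return 0;
}
```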
2^32 in binary:

1 00000000 00000000 00000000 00000000

2^32 - 1 in binary:

11111111 11111111 11111111 11111111

As you can see, 2^32 takes 33 bits, whereas 2^32 - 1 is the maximum value of a 32-bit integer.
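One way to see that 2^32 does not fit: in C, unsigned 32-bit arithmetic wraps modulo 2^32, so adding 1 to the maximum rolls the value over to 0. A small sketch:

```c
#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

int main(void) {
    uint32_t max = UINT32_MAX;     /* 2^32 - 1: all 32 bits set */
    uint32_t wrapped = max + 1u;   /* 2^32 would need a 33rd bit, so it wraps to 0 */
    printf("2^32 - 1 as uint32_t: %" PRIu32 "\n", max);
    printf("2^32     as uint32_t: %" PRIu32 " (wrapped)\n", wrapped);
    return 0;
}
```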
The reason for the seemingly "off-by-one" error here is that the lowest bit represents a one, not a two. So the first bit is actually 2^0, the second bit is 2^1, and so on.
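Summing those bit weights makes the same point numerically; a minimal sketch, assuming a 32-bit width:

```c
#include <stdio.h>
#include <stdint.h>

int main(void) {
    /* Setting all 32 bits means adding 2^0 + 2^1 + ... + 2^31. */
    uint64_t sum = 0;
    for (int i = 0; i < 32; i++) {
        sum += (uint64_t)1 << i;   /* weight of bit i is 2^i */
    }
    /* The series collapses to 2^32 - 1. */
    printf("sum of bit weights: %llu\n", (unsigned long long)sum);
    printf("2^32 - 1:           %llu\n",
           (unsigned long long)(((uint64_t)1 << 32) - 1));
    return 0;
}
```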