I have a question about the ranges of ints and floats:
If they both have the same size of 4 bytes, why do they have different ranges?
They are totally different - typically int is just a straightforward 2's complement signed integer, while float is a single-precision floating-point representation with 23 bits of mantissa, 8 bits of exponent, and 1 sign bit (see http://en.wikipedia.org/wiki/IEEE_754-2008).
If the __STDC_IEC_559__ macro is defined, not only are the types better specified (float being 32 bits and double being 64 bits, among other things), but the behavior of built-in operators and standard functions is also more tightly specified.
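A quick way to check for that guarantee is to test the conformance macro at compile time (a minimal sketch; note that some compilers use IEEE 754 formats without defining the macro):

    #include <stdio.h>

    int main(void) {
    #ifdef __STDC_IEC_559__
        /* Annex F applies: float is IEC 60559 single precision (32 bits),
           double is IEC 60559 double precision (64 bits) */
        puts("IEC 60559 (IEEE 754) arithmetic is guaranteed");
    #else
        puts("no IEC 60559 guarantee from this implementation");
    #endif
        return 0;
    }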
For example, the maximum value for a float is around 3.4 × 10^38, whereas an int only allows values up to about 2.1 × 10^9.
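To see those limits concretely, you can print the constants from <limits.h> and <float.h> (a minimal sketch, assuming a typical platform with a 32-bit int and an IEEE 754 float):

    #include <stdio.h>
    #include <limits.h>
    #include <float.h>

    int main(void) {
        /* both sets of constants come straight from the standard headers */
        printf("int range:   %d .. %d\n", INT_MIN, INT_MAX);
        printf("float range: %g .. %g\n", -FLT_MAX, FLT_MAX);
        return 0;
    }

On such a platform this prints roughly -2147483648 .. 2147483647 for int and -3.40282e+38 .. 3.40282e+38 for float.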
The size of a signed int or unsigned int item is the standard size of an integer on a particular machine. For example, in 16-bit operating systems, the int type is usually 16 bits, or 2 bytes. In 32-bit operating systems, the int type is usually 32 bits, or 4 bytes.
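You can check what your own machine uses with sizeof (a minimal sketch; the result is implementation-defined):

    #include <stdio.h>

    int main(void) {
        /* sizeof reports the size in bytes on this implementation */
        printf("sizeof(int)   = %zu\n", sizeof(int));
        printf("sizeof(float) = %zu\n", sizeof(float));
        return 0;
    }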
They have different ranges of values because their contents are interpreted differently; in other words, they have different representations.
Floats and doubles are typically represented as something like
+-+-------+------------------------+
| |       |                        |
+-+-------+------------------------+
 ^    ^               ^
 |    |               |
 |    |               +--- significand
 |    +-- exponent
 |
 +---- sign bit
where you have 1 bit to represent the sign s (0 for positive, 1 for negative), some number of bits to represent an exponent e, and the remaining bits for a significand, or fraction, f. The value being represented is (-1)^s * f * 2^e.
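One way to see those three fields is to copy a float's bytes into a 32-bit unsigned integer and mask them out (a minimal sketch, assuming a 32-bit IEEE 754 float; the value -6.25 is just an arbitrary example):

    #include <stdio.h>
    #include <string.h>
    #include <stdint.h>

    int main(void) {
        float x = -6.25f;                 /* -1.5625 * 2^2 */
        uint32_t bits;
        memcpy(&bits, &x, sizeof bits);   /* reinterpret the same 4 bytes */

        unsigned sign     = bits >> 31;           /* 1 bit            */
        unsigned exponent = (bits >> 23) & 0xFF;  /* 8 bits, bias 127 */
        unsigned fraction = bits & 0x7FFFFF;      /* 23 bits          */

        /* prints: sign=1 exponent=129 (unbiased 2) fraction=0x480000 */
        printf("sign=%u exponent=%u (unbiased %d) fraction=0x%06X\n",
               sign, exponent, (int)exponent - 127, fraction);
        return 0;
    }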
The range of values that can be represented is determined by the number of bits in the exponent; the more bits in the exponent, the wider the range of possible values.
The precision (informally, the size of the gap between representable values) is determined by the number of bits in the significand. Not all floating-point values can be represented exactly in a given number of bits. The more bits you have in the significand, the smaller the gap between any two representable values.
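You can watch the gap grow with magnitude using nextafterf from <math.h> (a minimal sketch; the exact gap sizes assume IEEE 754 single precision):

    #include <stdio.h>
    #include <math.h>

    int main(void) {
        /* distance from each value to the next representable float above it */
        printf("gap above 1.0f:  %g\n", nextafterf(1.0f, 2.0f) - 1.0f);    /* ~1.19e-07 */
        printf("gap above 1e6f:  %g\n", nextafterf(1e6f, 2e6f) - 1e6f);    /* 0.0625    */
        printf("gap above 1e30f: %g\n", nextafterf(1e30f, 2e30f) - 1e30f); /* ~7.56e+22 */
        return 0;
    }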
Each bit in the significand represents 1/2^n, where n is the bit number counting from the left, starting at 1:
110100...
^^ ^
|| |
|| +------ 1/2^4 = 0.0625
||
|+-------- 1/2^2 = 0.25
|
+--------- 1/2^1 = 0.5
                   ------
                   0.8125
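The same sum is easy to reproduce in code (a minimal sketch that walks the example bits above):

    #include <stdio.h>

    int main(void) {
        const char *bits = "110100";        /* the significand bits above */
        double value = 0.0, weight = 0.5;   /* bit n is worth 1/2^n       */

        for (const char *p = bits; *p; ++p) {
            if (*p == '1')
                value += weight;
            weight /= 2.0;
        }
        printf("%g\n", value);              /* prints 0.8125 */
        return 0;
    }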
Here's a link everyone should have bookmarked: What Every Computer Scientist Should Know About Floating-Point Arithmetic.
Two types with the same size in bytes can have different ranges for sure.
For example, signed int and unsigned int are both 4 bytes on typical platforms, but the signed type reserves one of its 32 bits for the sign, which halves its maximum value; its range also differs because half of it lies below zero. Floats, on the other hand, give up exact integer precision in exchange for spending bits on an exponent, which greatly widens the range of magnitudes they can represent.
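That precision cost is easy to demonstrate: a 32-bit float cannot represent every 32-bit integer, while a 32-bit int holds the same number exactly (a minimal sketch, assuming IEEE 754 single precision):

    #include <stdio.h>

    int main(void) {
        /* above 2^24 = 16777216, consecutive floats are more than 1 apart */
        float f = 16777216.0f;
        printf("%.1f + 1 = %.1f\n", f, f + 1.0f);  /* 16777216.0 + 1 = 16777216.0 */

        int i = 16777217;
        printf("int stores %d exactly\n", i);
        return 0;
    }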