
Size of int and float

I have a question about the ranges of ints and floats:

If they both have the same size of 4 bytes, why do they have different ranges?

asked Aug 16 '11 by atul



3 Answers

They are totally different: typically int is just a straightforward 2's complement signed integer, while float is a single-precision floating-point representation with 23 bits of mantissa, an 8-bit exponent, and 1 sign bit (see http://en.wikipedia.org/wiki/IEEE_754-2008).
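A quick way to see this on your own machine is to print the sizes and limits from <limits.h> and <float.h>. This is a minimal sketch, assuming a typical platform where int is 32 bits and float is IEEE 754 single precision:

    #include <stdio.h>
    #include <limits.h>
    #include <float.h>

    int main(void)
    {
        /* Both types usually occupy the same 4 bytes... */
        printf("sizeof(int)   = %zu bytes\n", sizeof(int));
        printf("sizeof(float) = %zu bytes\n", sizeof(float));

        /* ...but those bytes are interpreted completely differently. */
        printf("INT_MAX = %d\n", INT_MAX);   /* 2147483647 for a 32-bit int */
        printf("FLT_MAX = %e\n", FLT_MAX);   /* roughly 3.402823e+38 */
        printf("FLT_MANT_DIG = %d\n", FLT_MANT_DIG);  /* 24 significant bits: 23 stored + 1 implicit */
        return 0;
    }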

answered Sep 19 '22 by Paul R


They have different ranges of values because their contents are interpreted differently; in other words, they have different representations.

Floats and doubles are typically represented as something like

+-+-------+------------------------+
| |       |                        |
+-+-------+------------------------+
 ^    ^                ^
 |    |                |
 |    |                +--- significand
 |    +-- exponent
 |
 +---- sign bit

where you have 1 bit to represent the sign s (0 for positive, 1 for negative), some number of bits to represent an exponent e, and the remaining bits for a significand, or fraction f. The value being represented is (-1)^s * f * 2^e (in practice the stored exponent is biased, but that doesn't change the idea).
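To make that concrete, here is a sketch that pulls the three fields out of a float's bit pattern. It assumes a 32-bit IEEE 754 float, which is typical but not guaranteed by the C standard:

    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>

    int main(void)
    {
        float x = 0.8125f;               /* 0.8125 = 0.5 + 0.25 + 0.0625 */
        uint32_t bits;
        memcpy(&bits, &x, sizeof bits);  /* copy the raw bit pattern safely */

        unsigned sign     = bits >> 31;            /* 1 sign bit */
        unsigned exponent = (bits >> 23) & 0xFFu;  /* 8 exponent bits, biased by 127 */
        unsigned fraction = bits & 0x7FFFFFu;      /* 23 significand bits (hidden leading 1 not stored) */

        /* Prints: sign = 0, exponent = 126 (unbiased -1), fraction = 0x500000 */
        printf("sign = %u, exponent = %u (unbiased %d), fraction = 0x%06X\n",
               sign, exponent, (int)exponent - 127, fraction);
        return 0;
    }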

The range of values that can be represented is determined by the number of bits in the exponent; the more bits in the exponent, the wider the range of possible values.

The precision (informally, the size of the gap between representable values) is determined by the number of bits in the significand. Not all floating-point values can be represented exactly in a given number of bits. The more bits you have in the significand, the smaller the gap between any two representable values.
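The growing gap is easy to observe with nextafterf from <math.h>. A small sketch (compile with -lm on Linux), again assuming IEEE 754 single precision:

    #include <stdio.h>
    #include <math.h>

    int main(void)
    {
        float small = 1.0f;
        float big   = 16777216.0f;   /* 2^24: above this, consecutive floats are 2 apart */

        /* nextafterf returns the next representable float toward the second argument. */
        printf("gap just above 1.0  = %.10g\n", nextafterf(small, 2.0f) - small);  /* about 1.19e-07 */
        printf("gap just above 2^24 = %.10g\n", nextafterf(big, 2.0f * big) - big); /* 2 */
        return 0;
    }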

Each bit in the significand represents 1/2^n, where n is the bit number counting from the left:

 110100...
 ^^ ^
 || |  
 || +------ 1/2^4 = 0.0625
 || 
 |+-------- 1/2^2 = 0.25
 |
 +--------- 1/2^1 = 0.5
                    ------
                    0.8125
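The same arithmetic as a tiny sketch: walk the example pattern from the left and add 1/2^n for each set bit. The pattern string here is just the illustration above, not a real stored significand:

    #include <stdio.h>

    int main(void)
    {
        const char *pattern = "110100";  /* significand bits from the diagram */
        double value  = 0.0;
        double weight = 0.5;             /* leftmost bit is worth 1/2^1 */

        for (const char *p = pattern; *p != '\0'; ++p) {
            if (*p == '1')
                value += weight;
            weight /= 2.0;               /* each step right halves the weight */
        }
        printf("%s -> %g\n", pattern, value);  /* prints: 110100 -> 0.8125 */
        return 0;
    }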

Here's a link everyone should have bookmarked: What Every Computer Scientist Should Know About Floating-Point Arithmetic.

answered Sep 19 '22 by John Bode


Two types with the same size in bytes can certainly have different ranges.

For example, signed int and unsigned int are both 4 bytes, but signed int reserves one of its 32 bits for the sign, which halves its maximum value; its range also differs in shape, since it extends into negative numbers. Floats, in turn, spend some of their bits on an exponent and a fractional part: they give up the ability to represent every integer exactly in exchange for fractions and a far wider range.
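To illustrate, a minimal sketch (assuming 32-bit int, as on most current platforms) printing all three ranges side by side:

    #include <stdio.h>
    #include <limits.h>
    #include <float.h>

    int main(void)
    {
        /* Same 4 bytes in each case; only the interpretation differs. */
        printf("signed int:   [%d, %d]\n", INT_MIN, INT_MAX);
        printf("unsigned int: [0, %u]\n", UINT_MAX);
        printf("float:        [-%e, %e]\n", FLT_MAX, FLT_MAX);
        return 0;
    }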

answered Sep 21 '22 by John Humphreys