
Float and Int Both 4 Bytes? How Come?


Well, here's a quick explanation:

An int and a float usually each take up one "word" in memory. Today, with the shift to 64-bit systems, this may mean that a word is 64 bits, or 8 bytes, allowing the representation of a huge span of numbers. Or it could still be a 32-bit system, meaning each word in memory takes up 4 bytes. Typically, memory is accessed on a word-by-word basis.

The difference between an int and a float is not the physical space they occupy in memory, but the way the ALU (Arithmetic Logic Unit) treats the number. An int represents its corresponding number directly in binary (well, almost: it uses two's-complement notation). A float, on the other hand, is encoded (typically in the IEEE 754 standard format) to represent a number in exponential form (e.g., 2.99 * 10^6 is exponential form).

Your misunderstanding, I think, lies in the misconception that a floating-point number can hold more information. While floats can represent numbers of greater magnitude, they cannot represent them with as much accuracy, because some of the bits must encode the exponent, and the exponent itself can be quite large. So a floating-point number gives you fewer significant digits (which means less information is represented), whereas ints represent every integer in their range exactly, but the magnitude of the numbers they can reach is much smaller.


I just find it highly surprising that something which represents (virtually) the entire real line is the same size as something which represents only the integers.

Perhaps this will become less surprising once you realize that there are lots of integers that a 32-bit int can represent exactly, and a 32-bit float can't.

A float can represent fewer distinct numbers than an int, but they're spread over a wider range.

It is also worth noting that the spacing between consecutive floats becomes wider as one moves away from zero, whereas it remains constant for consecutive ints.


I believe the important point here is that an int is exact while a float may be rounded. A portion of a float's bits encodes the location of the decimal point (the exponent), while another portion holds the significant digits (the significand). So while you may be able to write 1.2E38, only the first several digits are meaningful; the rest are effectively filled with zeros.

From: http://en.wikipedia.org/wiki/Floating_point

"with seven decimal digits could in addition represent 1.234567, 123456.7, 0.00001234567, 1234567000000000, and so on"

It depends on how the particular system implements floats.