Can someone explain this floating-point behavior?

Inspired by this question, I was trying to find out what exactly happens there (my answer was more intuitive than rigorous, but I cannot quite explain the why of it).

I believe it comes down to this (running 64-bit Python):

>>> import sys
>>> sys.maxint
9223372036854775807
>>> float(sys.maxint)
9.2233720368547758e+18

Python uses the IEEE 754 double-precision floating-point representation, which effectively has 53 bits for the significand. However, as far as I understand it, the significand in the above example would require 57 bits (56 if you drop the implied leading 1) to be represented. Can someone explain this discrepancy?
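
For reference, the 53-bit figure can be checked from within Python itself (sys.float_info is available from 2.6 onwards):

>>> sys.float_info.mant_dig    # significand precision of a float (a C double), counting the implied leading 1
53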

Björn Pollex asked Jan 21 '23
1 Answer

Perhaps the following will help clear things up:

>>> import sys
>>> hex(int(float(sys.maxint)))
'0x8000000000000000L'

This shows that float(sys.maxint) is in fact a power of 2: sys.maxint is 2**63 - 1, which needs 63 significant bits and so cannot be represented exactly in a 53-bit significand, so the conversion rounds it to the nearest representable double, which is 2**63. Therefore, in binary its mantissa is exactly 1. In IEEE 754 the leading 1. is implied, so in the machine representation this number's mantissa consists of all zero bits.
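
You can double-check the power-of-two claim directly (just a quick sanity check; math.frexp splits a float into mantissa and exponent):

>>> int(float(sys.maxint)) == 2 ** 63
True
>>> import math
>>> math.frexp(float(sys.maxint))    # mantissa 0.5 and exponent 64, i.e. exactly 2**63
(0.5, 64)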

In fact, the IEEE bit pattern representing this number is as follows:

0x43E0000000000000

Observe that only the first three nibbles (the sign and the exponent) are non-zero. The significand consists entirely of zeroes. As such it doesn't require 56 (nor indeed 53) bits to be represented.
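
If you want to see that bit pattern for yourself, one way (a sketch using the standard struct module; '>d' and '>Q' reinterpret the same 8 bytes as a big-endian double and an unsigned 64-bit integer) is:

>>> import struct, sys
>>> '0x%016X' % struct.unpack('>Q', struct.pack('>d', float(sys.maxint)))[0]
'0x43E0000000000000'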

NPE answered Jan 22 '23