
How to calculate decimal digits of precision based on the number of bits?

I am learning about floating point formats (IEEE). For the single precision floating point format, it is mentioned that the mantissa has 24 bits, and so it has 6 1/2 decimal digits of precision (as per the book "Understanding the Machine"), or 7.22 decimal digits of precision.

I don't understand how the decimal digits of precision are calculated. Can somebody please enlighten me?

Yogi asked May 07 '12 14:05

1 Answer

With 24 bits, if one bit were reserved for the sign, the largest value you could represent would be 2^23 - 1 = 8,388,607. That is, you can always get 6 digits and sometimes a 7th; this is often expressed as "6 1/2 digits". If all 24 bits represent an unsigned value (as is effectively the case for the IEEE single precision significand: 23 stored bits plus an implicit leading 1), then the maximum value you can store is 2^24 - 1 = 16,777,215, or 7 and a fraction digits.
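The two maxima quoted above are easy to check directly (Python here is just for illustration):

```python
# Largest value in 23 magnitude bits (one bit reserved for sign)
signed_max = 2**23 - 1
# Largest value when all 24 bits carry magnitude (unsigned)
unsigned_max = 2**24 - 1

print(signed_max)    # 8388607  -> 6 full decimal digits, sometimes a 7th
print(unsigned_max)  # 16777215 -> 7 full decimal digits, sometimes an 8th
```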

When someone quotes you a number with explicit decimal places like 7.22 decimal digits, what they're doing is taking the log (base 10) of the maximum value. So log(16,777,215) ≈ 7.22.

In general, the number of decimal digits you'll get from a given number of bits is:

d = log10(2^b)

where b is the number of bits and d is the number of decimal digits. Then:

d = b * log10(2)
d ≈ b * 0.3010

So 24 bits gives 24 * 0.3010 ≈ 7.22.
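The formula above is a one-liner in code; a minimal sketch (the function name `decimal_digits` is mine, not from the answer):

```python
import math

def decimal_digits(bits: int) -> float:
    """Decimal digits of precision carried by `bits` binary digits: b * log10(2)."""
    return bits * math.log10(2)

print(decimal_digits(24))  # ~7.22, single precision's 24-bit significand
print(decimal_digits(53))  # ~15.95, double precision's 53-bit significand
```

The same computation explains the commonly quoted figure of "15-16 significant digits" for IEEE double precision.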

Jay answered Sep 28 '22 01:09