So I have been trying to wrap my head around the relation between the number of significant digits in a floating point number and the relative loss of precision, but I just can't seem to make sense of it. I was reading an article earlier that said to do the following:
So why is this 128 when there are 10 significant digits? I understand how floats are stored (1 bit for sign, 8 bits for exponent, 23 bits for mantissa) and understand how you will lose precision if you assume that all integers will automatically find exact homes in a float data structure, but I don't understand where the 128 comes from. My intuition tells me that I'm on the right track, but I'm hoping that someone may be able to clear this up for me.
I initially thought that the distance between possible floats was 2 ^ (n-1) where n was the number of significant digits, but this did not hold true.
Thank you!
The distance between two adjacent floating point numbers depends on the exponent: the smaller the exponent, the smaller the gap between one floating point number and the next. The next thing to consider is that the exponent stored in a floating point number is a binary exponent, not a decimal one, so what matters here is binary precision rather than decimal precision. Figure 9.1 of this document explains the concept pretty well.
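To see this gap grow with the exponent, here is a small sketch using Python's `math.ulp` (note: Python floats are IEEE-754 doubles, not the 32-bit floats described in the question, but the principle is identical):

```python
import math

# math.ulp(x) returns the gap between x and the next representable double.
# Each time x crosses a power of two, the gap doubles.
for x in [1.0, 2.0, 1024.0, 1e300]:
    print(x, math.ulp(x))
```

For example, `math.ulp(2.0)` is exactly twice `math.ulp(1.0)`, because 2.0 has a binary exponent one larger.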
The "distance" between two adjacent floating point numbers is 2^(1-n+e), where e is the true exponent and n is the number of bits in the mantissa (AKA significand, including the implicit leading bit). The stored exponent is not the true exponent; it carries a bias, which for IEEE-754 single-precision floats is 127 (for normalized numbers). So, as Peter O said, the distance depends on the exponent.
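As a sanity check, the formula 2^(1-n+e) can be verified against Python's `math.ulp`. This sketch assumes Python doubles, for which n = 53; `math.frexp` recovers the true exponent e:

```python
import math

def ulp_from_formula(x, n=53):
    # frexp returns (m, exp) with 0.5 <= m < 1, so the true
    # binary exponent of x is exp - 1.
    _, exp = math.frexp(x)
    e = exp - 1
    return 2.0 ** (1 - n + e)

# The formula matches the actual gap for normalized numbers:
for x in [1.0, 3.14, 1e10]:
    assert ulp_from_formula(x) == math.ulp(x)
```

For 32-bit floats the same formula applies with n = 24, which is where the mantissa's 23 stored bits plus the implicit bit come from.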