Some people say that machine epsilon for double-precision floating-point numbers is 2^-53 and others (more commonly) say it's 2^-52. I have messed around estimating machine precision using integers besides 1 and approaching from above and below (in MATLAB), and have gotten both values as results. Why is it that both values can be observed in practice? I thought it should always produce an epsilon around 2^-52.
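For concreteness, here is a minimal MATLAB sketch of the kind of halving loop I mean (a paraphrase, not my exact code):

```matlab
% Halve a candidate epsilon until adding half of it no longer changes 1.
e = 1;
while (1 + e/2) > 1
    e = e / 2;
end
e        % 2.2204e-16, i.e. 2^-52, under the default rounding mode
eps      % MATLAB's built-in machine epsilon, the same value
```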
There's an inherent ambiguity in the term "machine epsilon", so to fix this, it is commonly defined to be the difference between 1 and the next larger representable number. (That next number is, not by accident, obtained by literally incrementing the binary representation by one.)
The IEEE 754 64-bit float has 52 explicit mantissa bits, so 53 including the implicit leading 1. So the two consecutive numbers are:
```
1.0000 ..... 0000
1.0000 ..... 0001
  \-- 52 digits --/
```
So the difference between the two is 2^-52.
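As a sketch of the "increment the bit pattern" remark above (assuming MATLAB, where typecast reinterprets bits without conversion):

```matlab
% Reinterpret 1.0 as raw bits, add one, and reinterpret back as a double.
bits = typecast(1.0, 'uint64');        % bit pattern of 1.0
next = typecast(bits + 1, 'double');   % next representable double above 1
next - 1                               % ans = 2.2204e-16
(next - 1) == 2^-52                    % true
(next - 1) == eps                      % true: agrees with MATLAB's eps
```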
It depends on which way you round. 1 + 2^-53 is exactly halfway between 1 and 1 + 2^-52, which are consecutive in double-precision floating point. So if you round it up, it is different from 1; if you round it down, it is equal to 1. Under the default round-to-nearest-even mode, this tie rounds down to 1, because 1 has the even (all-zero) mantissa.
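A quick MATLAB check of both cases (a sketch; the default IEEE round-to-nearest-even mode is assumed, and the comparisons below are mine, not part of the answer above):

```matlab
% The tie 1 + 2^-53 rounds down to 1 (the even mantissa wins):
(1 + 2^-53) == 1        % true: from above you observe an "epsilon" of 2^-52
(1 + 2^-52) >  1        % true: 1 + 2^-52 is the next representable number
% Below 1 the spacing halves, which is one way to observe 2^-53:
(1 - 2^-53) <  1        % true: 1 - 2^-53 is exactly representable
(1 - 2^-54) == 1        % true: this tie also rounds to the even mantissa, 1.0
```

This matches the question: probing from above yields 2^-52, while probing from below, where the gap between representable numbers shrinks to 2^-53, yields the smaller value.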