Possible Duplicate:
Why are these numbers not equal?
> 0.9 == 1 - 0.1
[1] TRUE
> 0.9 == 1.1 - 0.2
[1] FALSE
> all.equal(0.9,1.1-0.2)
[1] TRUE
> all.equal(0.9, 1.1-0.3)
[1] "Mean relative difference: 0.1111111"
> isTRUE(all.equal(0.9, 1.1-0.3))
[1] FALSE
and if used in code:
if (isTRUE(all.equal(0.9, 1.1-0.2))) {
....
}
or in vectors:
> vec1=0.9
> vec2=c(1.1-0.2,1.3-0.4,1.0-0.2)
> mapply(function(...)isTRUE(all.equal(...)),vec1, vec2)
[1] TRUE TRUE FALSE
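If the mapply() call feels clunky, base R's Vectorize() (which is just a wrapper around mapply()) can build a reusable vectorised comparison; near_equal below is only an illustrative name, not an existing function:
> near_equal <- Vectorize(function(x, y) isTRUE(all.equal(x, y)))
> near_equal(vec1, vec2)
[1]  TRUE  TRUE FALSE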
I recommend you read "What Every Computer Scientist Should Know About Floating-Point Arithmetic".
Also, Richie points out that the R FAQ mentions this issue. You should really read the whole of the R FAQ.
The problem you have encountered is that floating point cannot represent decimal fractions exactly in most cases, which means you will frequently find that exact matches fail.
R lies slightly when you say:
> 1.1-0.2
[1] 0.9
> 0.9
[1] 0.9
You can find out what it really thinks in decimal:
> sprintf("%.54f",1.1-0.2)
[1] "0.900000000000000133226762955018784850835800170898437500"
> sprintf("%.54f",0.9)
[1] "0.900000000000000022204460492503130808472633361816406250"
You can see these numbers are different, but the representation is a bit unwieldy. If we look at them in binary (well, hex, which is equivalent) we get a clearer picture:
> sprintf("%a",0.9)
[1] "0x1.ccccccccccccdp-1"
> sprintf("%a",1.1-0.2)
[1] "0x1.ccccccccccccep-1"
> sprintf("%a",1.1-0.2-0.9)
[1] "0x1p-53"
You can see that they differ by 2^-53. This is important because it is the smallest representable difference between two numbers whose value is close to 1, as these are.
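You can confirm this directly in R; on a machine with standard IEEE-754 doubles (which is essentially every machine R runs on), the subtraction below is exact, so the comparison with 2^-53 comes out TRUE:
> (1.1 - 0.2) - 0.9
[1] 1.110223e-16
> (1.1 - 0.2) - 0.9 == 2^-53
[1] TRUE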
We can find out for any given computer what this smallest representable difference is by looking at R's .Machine variable:
> ?.Machine
....
double.eps the smallest positive floating-point number x
such that 1 + x != 1. It equals base^ulp.digits if either
base is 2 or rounding is 0; otherwise, it is
(base^ulp.digits) / 2. Normally 2.220446e-16.
....
> .Machine$double.eps
[1] 2.220446e-16
> sprintf("%a",.Machine$double.eps)
[1] "0x1p-52"
You can use this fact to create a 'nearly equals' function, one that checks whether the difference between two numbers is within a small tolerance derived from this smallest representable difference. In fact, such a function already exists in base R: all.equal (thanks to the commenter for pointing it out).
> ?all.equal
....
all.equal(x,y) is a utility to compare R objects x and y testing ‘near equality’.
....
all.equal(target, current,
tolerance = .Machine$double.eps ^ 0.5,
scale = NULL, check.attributes = TRUE, ...)
....
> all.equal(0.9,1.1-0.2)
[1] TRUE
So all.equal is actually checking that the relative difference between the numbers is no larger than the square root of machine epsilon (about 1.5e-8), a much more forgiving tolerance than machine epsilon itself.
This algorithm goes a bit funny near extremely small numbers called denormals, but you don't need to worry about that.
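To make the idea concrete, here is a minimal sketch of such a 'nearly equals' check; near_enough is just an illustrative name, and unlike the real all.equal() it compares an absolute rather than a relative (scaled) difference, so in real code prefer isTRUE(all.equal(...)):
> near_enough <- function(x, y, tol = .Machine$double.eps^0.5) abs(x - y) < tol
> near_enough(0.9, 1.1 - 0.2)
[1] TRUE
> near_enough(0.9, 1.1 - 0.3)
[1] FALSE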
You need to be careful in programming when you want to test whether or not two computed numbers are equal. R will assume that you mean ‘exactly equal’, and what that means depends on machine precision. Most numbers are rounded to 53 binary digits of accuracy. Typically, therefore, two floating point numbers will not reliably be equal unless they were computed by the same algorithm, and not always even then. You can see this by squaring the square root of 2: surely these values are the same?
x <- sqrt(2)
x * x == 2
[1] FALSE
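A tolerance-based comparison, as described above, handles this case too:
isTRUE(all.equal(x * x, 2))
[1] TRUE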
Going back to the original example, we can see by how much 1.1 - 0.2 and 0.9 differ by subtracting one from the other:
1.1 - 0.2 - 0.9
[1] 1.110223e-16