I've been reading the paper What Every Computer Scientist Should Know About Floating-Point Arithmetic.
I had seen various ULP calculations and felt I pretty much understood them, until the discussion of subtraction came up.
Take another example: 10.1 - 9.93. This
becomes
x = 1.01 × 10^1
y = 0.99 × 10^1
x - y = .02 × 10^1
The correct answer is .17, so the computed difference is off by 30 ulps
and is wrong in every digit!
Why is this 30 ulps and not 0.3? Surely the ulp here is 0.01 × 10^1, or in other words 0.1. The error is 0.03, which would be 0.3 ulps.
The correct answer is written in the comments but I'll write it down as an answer so that it's not hidden in the comment noise.
Basically x - y = .02 × 10^1 gets normalized to 2.00 × 10^-1, and this means that the ULP is now the smallest increment in this normalized representation: 0.01 × 10^-1 = 0.001. The ulp is measured relative to the computed result, not relative to the operands.
Consequently the error is
0.03 / (0.01 × 10^-1) = 30 ulps
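As a sanity check, the arithmetic above can be reproduced with Python's `decimal` module. This is only a sketch of the paper's β = 10, p = 3 example: the rounded operands are entered by hand (0.99 × 10^1 models y being truncated to three digits with no guard digit), since `Decimal` itself would compute the subtraction exactly.

```python
from decimal import Decimal

# Operands as the 3-digit machine sees them (no guard digit),
# entered manually to match the paper's example.
x = Decimal("1.01E1")   # 10.1
y = Decimal("0.99E1")   # 9.93 truncated to 9.9 after alignment

computed = x - y                              # 0.02 x 10^1 = 0.2
exact = Decimal("10.1") - Decimal("9.93")     # 0.17

# The computed result normalizes to 2.00 x 10^-1,
# so one ulp of that result is 0.01 x 10^-1 = 0.001.
ulp = Decimal("0.01E-1")

error_in_ulps = (computed - exact) / ulp
print(int(error_in_ulps))   # 30
```

Note that if you instead (incorrectly) measured the error against an ulp of 0.01 × 10^1 = 0.1, you would get the 0.3 figure from the question.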