
Floating Point Arithmetic: Deriving Wobble [duplicate]

I am going through David Goldberg's What Every Computer Scientist Should Know About Floating-Point Arithmetic. I have no formal background in numerical analysis and am having a hard time understanding the paper. In the section Relative Error and Ulps, he derives the upper bound of the relative error when approximating a real number with the closest FP number. So corresponding to .5 ulp, when a real number is approximated by a FP number d.dd...d x β^e, the absolute error is ((β/2)β^−p) x β^e. He says that numbers of the form d.dd...d x β^e have values that range from β^e to β x β^e. I don't understand where this range comes from. To find the relative error, I need to divide by the actual real number that I am approximating. Why is he dividing by the values that the FP number can take? What am I missing?

Further, I am struggling to understand the significance of wobble. A few paragraphs later, he demonstrates this relationship by taking a real number x and approximating it with a FP number, finding the error both in ulps and as a relative error, and then multiplying the real number by 8 (and the FP approximation as well). The error measured in ulps increases, but the relative error remains the same.

Somehow I fail to develop an intuition for this relationship. Where is it useful?

Prashant Pandey asked Mar 01 '26

1 Answer

So corresponding to .5 ulp, when a real number is approximated by a FP number d.dd...d x β^e, the absolute error is ((β/2)β^−p) x β^e.

Not quite; the paper says that when a real number is approximated by the closest floating-point number, the absolute error can be as large as ((β/2)β^−p) • β^e, not that it equals that value.

He says that numbers of the form d.dd...d x β^e have values that range from β^e to β x β^e. I don't understand where this range comes from.

That is because the first digit d is always some digit from 1 to β−1. If the first digit were 0, we would adjust the exponent e down by one to bring more digits up. If there were two or more digits before the radix point, we would adjust e up to push digits down. For example, we do not represent 12345 as .012345•10^6 or as 12.345•10^3; we use 1.2345•10^4. The significand in Goldberg's format is always at least one and less than β. Since the significand S satisfies 1 ≤ S < β, the (positive) number represented satisfies 1•β^e ≤ S•β^e < β•β^e.
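As a quick check of this (my own illustration, not from Goldberg's paper), in binary floating-point (β = 2) the normalized significand always lies in [1, 2). Python's `math.frexp` returns a mantissa in [0.5, 1), so doubling it and lowering the exponent by one recovers Goldberg's normalized form:

```python
import math

def significand_exponent(x):
    """Return (s, e) with x = s * 2**e and 1 <= s < 2, for positive finite x."""
    m, e = math.frexp(x)   # frexp gives x = m * 2**e with 0.5 <= m < 1
    return 2 * m, e - 1    # renormalize so the significand is in [1, 2)

for x in (12345.0, 0.1, 7.5):
    s, e = significand_exponent(x)
    assert 1 <= s < 2 and s * 2**e == x   # significand stays in [1, beta)
    print(f"{x} = {s} * 2**{e}")
```

Every positive finite float lands in exactly one such [2^e, 2^(e+1)) interval, which is the binary version of the β^e to β•β^e range above.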

Further, I am struggling to understand the significance of wobble.

Consider all the real numbers between 10,000 (inclusive) and 100,000 (exclusive). In a base-10, five-digit floating-point format, these all have an ULP of 1. When we convert 10,000.7 to this format, the closest number is 10,001, so the absolute error is .3, the ULP error is .3, and the relative error is .3 / 10,000.7 ≈ 2.9998•10^−5. When we convert 99,000.7 to this format, the closest number is 99,001, so the absolute error is .3, the ULP error is .3, and the relative error is .3 / 99,000.7 ≈ 3.03•10^−6. So the ULP error is the same, but the relative error is nearly ten times less. Conversely, a relative error of about 3•10^−5 is .3 ULP just above 10,000 but 3 ULP just below 100,000.

When we convert 100,007 to this format, the closest representable number is 100,010 (the ULP has jumped to 10), so the absolute error is 3, the ULP error is .3, and the relative error is back to 2.9998•10^−5. This is what Goldberg means by the relative error wobbling relative to the ULP error. Within a fixed exponent interval, the ULP is a fixed amount. Over a large exponent range, the ULP error approximates the relative error; it changes at the same average rate as the relative error, but it does so in jumps, whereas the relative error changes continuously.

Eric Postpischil answered Mar 03 '26


