Suppose I want to divide many numbers by the same value:
a /= x;
b /= x;
c /= x;
...
Since multiplication is faster, the temptation is to do this
tmp = 1.0f / x;
a *= tmp;
b *= tmp;
c *= tmp;
...
1) Is this guaranteed to produce identical answers? I suspect not but some confirmation would be nice.
2) If x is extremely large or extremely small, I expect this could cause a significant loss of accuracy. Is there a formula that will tell me how much accuracy I will sacrifice?
3) Perhaps there's no convenient formula, but can we at least state a rule of thumb for when numerical instabilities will be an issue? Is it to do with the magnitudes of the operands, or the difference between the magnitudes of the operands perhaps?
1) No, this is not guaranteed to produce identical answers. Even with IEEE arithmetic, subtle rounding effects may lead to a difference of 1 or 2 ULP between a/x and a*(1/x).
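For a quick check of 1), a sketch along these lines (my own illustration; the helper ulp_diff and the operand ranges are assumptions, not part of the answer) compares the two expressions over random, well-scaled operands and reports how far apart the results land in ULPs:

/* Compare a/x with a*(1.0/x) and count the cases that differ. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <stdint.h>

/* Distance in ULPs between two finite, positive doubles
   (bit patterns of same-sign doubles are ordered as integers). */
static int64_t ulp_diff(double a, double b) {
    int64_t ia, ib;
    memcpy(&ia, &a, sizeof ia);
    memcpy(&ib, &b, sizeof ib);
    return llabs(ia - ib);
}

int main(void) {
    srand(42);
    int mismatches = 0;
    for (int i = 0; i < 1000000; i++) {
        double a = (double)rand() / RAND_MAX;        /* dividend in [0, 1] */
        double x = 1.0 + (double)rand() / RAND_MAX;  /* divisor in [1, 2], well-scaled */
        double q1 = a / x;           /* one rounding */
        double q2 = a * (1.0 / x);   /* two roundings: reciprocal, then product */
        if (q1 != q2) {
            mismatches++;
            if (mismatches <= 3)     /* typically 1 ULP apart, occasionally 2 */
                printf("a=%.17g x=%.17g diff=%lld ULP\n",
                       a, x, (long long)ulp_diff(q1, q2));
        }
    }
    printf("%d of 1000000 cases differed\n", mismatches);
    return 0;
}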
2) If x is extremely small (that is, a bit smaller than DBL_MIN, the minimum normalized positive floating-point number, as in the case of sub-normals), 1/x is INF with total loss of precision. Potentially significant loss of precision also occurs with very large x, for instance when the FP model does not support sub-normals.
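A small illustration of that failure mode (my own sketch, assuming IEEE-754 binary64 with sub-normal support; the chosen constants are only examples):

#include <stdio.h>
#include <float.h>
#include <math.h>

int main(void) {
    double tiny = DBL_MIN / 16;   /* a sub-normal, below DBL_MIN */
    double huge = DBL_MAX / 2;    /* a very large finite value */

    /* Reciprocal of a sub-normal overflows to INF. */
    printf("1/tiny = %g (isinf: %d)\n", 1.0 / tiny, isinf(1.0 / tiny));

    /* Reciprocal of a huge value lands in the sub-normal range
       (or is flushed to zero on models without sub-normals). */
    printf("1/huge = %g\n", 1.0 / huge);

    /* Direct division can still be finite where the reciprocal trick blows up. */
    double a = 0.1;
    printf("a/tiny = %g, a*(1/tiny) = %g\n", a / tiny, a * (1.0 / tiny));
    return 0;
}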
By testing |x| against the largest finite number <= 1/DBL_MIN and the smallest non-zero value >= 1/DBL_MAX, code can determine when significant loss of accuracy begins. A formula would likely depend on the FP model used and the exponent of x as well as the model's limits. In those extreme ranges of binary64, the difference between the binary exponent of x and Emin (or Emax) would be a first-order estimate of the bits lost.
3) Significant numerical instabilities occur in the ranges discussed above.