
What precision are floating-point arithmetic operations done in?

Consider two very simple multiplications below:

double result1;
long double result2;
float var1 = 3.1;
float var2 = 6.789;
double var3 = 87.45;
double var4 = 234.987;

result1 = var1 * var2;
result2 = var3 * var4;

Are multiplications done by default in a higher precision than the operands? That is, is the first multiplication done in double precision, and the second (on the x86 architecture) in 80-bit extended precision? Or should we cast the operands in expressions to the higher precision ourselves, like below?

result1 = (double)var1 * (double)var2;
result2 = (long double)var3 * (long double)var4;

What about the other operations (addition, division, and remainder)? For example, when adding more than two positive single-precision values, using the extra significant bits of double precision to hold the intermediate results of the expression can reduce round-off error.
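For example, a minimal sketch of that summation idea (the array contents are arbitrary):

#include <cstdio>

int main() {
    float data[] = {0.1f, 0.2f, 0.3f, 0.4f, 0.5f};
    double acc = 0.0;        // double-precision accumulator
    for (float x : data)
        acc += x;            // each partial sum keeps 53 significant bits
    float sum = (float)acc;  // one final rounding back to float
    std::printf("sum = %.9g\n", sum);
}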

asked Aug 14 '14 by Pooria


3 Answers

Precision of floating-point computations

C++11 incorporates the definition of FLT_EVAL_METHOD from C99 in <cfloat>.

FLT_EVAL_METHOD

Possible values:
-1  undetermined
 0  evaluate just to the range and precision of the type
 1  evaluate float and double as double, and long double as long double
 2  evaluate all as long double
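To find out which mode applies on your compiler and target, you can simply print the macro; a minimal sketch:

#include <cfloat>
#include <cstdio>

int main() {
    // Defined in <cfloat> since C++11 (inherited from C99's <float.h>).
#ifdef FLT_EVAL_METHOD
    std::printf("FLT_EVAL_METHOD = %d\n", FLT_EVAL_METHOD);
#else
    std::printf("FLT_EVAL_METHOD is not defined by this compiler\n");
#endif
}

On x86-64 with SSE2 arithmetic this typically prints 0; for 32-bit x86 code that uses the x87 FPU, it is typically 2.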

If your compiler defines FLT_EVAL_METHOD as 2, then the computations of r1 and r2 below are equivalent, and so are those of s1 and s2:

double var3 = …;
double var4 = …;

double r1 = var3 * var4;
double r2 = (long double)var3 * (long double)var4;

long double s1 = var3 * var4;
long double s2 = (long double)var3 * (long double)var4;

If your compiler defines FLT_EVAL_METHOD as 2, then in all four computations above, the multiplication is done at the precision of the long double type.

However, if the compiler defines FLT_EVAL_METHOD as 0 or 1, then r1 and r2 (and likewise s1 and s2) are not always the same: the multiplications computing r1 and s1 are done at the precision of double, while the multiplications computing r2 and s2 are done at the precision of long double.
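To make the difference concrete, here is a sketch, assuming a target where FLT_EVAL_METHOD is 0 and long double is wider than double (e.g. x86-64 Linux with the 80-bit long double):

#include <cstdio>

int main() {
    double var3 = 87.45;
    double var4 = 234.987;
    long double s1 = var3 * var4;                            // multiplied in double, then widened
    long double s2 = (long double)var3 * (long double)var4;  // multiplied in long double
    // s1 has already lost the exact product's low bits to double rounding;
    // s2 keeps them, so the two printouts can differ in the last digits.
    std::printf("s1 = %.21Lg\ns2 = %.21Lg\n", s1, s2);
}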

Getting wide results from narrow arguments

If you are computing results that are destined to be stored in a wider result type than the type of the operands, as are result1 and result2 in your question, you should always convert the arguments to a type at least as wide as the target, as you do here:

result2 = (long double)var3 * (long double)var4;

Without this conversion (if you write var3 * var4), if the compiler's definition of FLT_EVAL_METHOD is 0 or 1, the product will be computed in the precision of double, which is a shame, since it is destined to be stored in a long double.

If the compiler defines FLT_EVAL_METHOD as 2, then the conversions in (long double)var3*(long double)var4 are not necessary, but they do not hurt either: the expression means exactly the same thing with and without them.

Digression: if the destination format is as narrow as the arguments, when is extended-precision for intermediate results better?

Paradoxically, for a single operation, rounding only once to the target precision is best. The only effect of computing a single multiplication in extended precision is that the result is rounded to extended precision and then to double precision, which makes it less accurate. In other words, with FLT_EVAL_METHOD 0 or 1, the result r2 above is sometimes less accurate than r1 because of double rounding, and never more accurate if the compiler uses IEEE 754 floating-point.
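Double rounding is easiest to see with a tie-breaking value. A minimal sketch, assuming long double is the x86 80-bit format (so the value below is exactly representable in it):

#include <cmath>
#include <cstdio>

int main() {
    // 1 + 2^-24 + 2^-53 fits exactly in a 64-bit significand.
    long double exact = 1.0L + std::ldexp(1.0L, -24) + std::ldexp(1.0L, -53);

    float direct = (float)exact;          // one rounding: exact -> float
    float twice  = (float)(double)exact;  // two roundings: exact -> double -> float

    // direct is 1 + 2^-23; twice is 1.0, because the tie at the double
    // step rounded to even, discarding the bit that broke the tie.
    std::printf("direct = %.9g\ntwice  = %.9g\n", direct, twice);
}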

The situation is different for larger expressions that contain several operations. For these, it is usually better to compute intermediate results in extended precision, either through explicit conversions or because the compiler uses FLT_EVAL_METHOD == 2. This question and its accepted answer show that with 80-bit extended-precision intermediate computations for binary64 IEEE 754 arguments and results, the interpolation formula u2 * (1.0 - u1) + u1 * u3 always yields a result between u2 and u3 for u1 between 0 and 1. This property may not hold with binary64-precision intermediate computations, because of their larger rounding errors.
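As a sketch, forcing extended-precision intermediates for that formula might look like this (the function name interpolate_ext is mine, and the in-bounds property assumes the 80-bit x86 long double):

#include <cstdio>

// Interpolates between u2 and u3; with 80-bit extended intermediates,
// the result stays within [u2, u3] for u1 in [0, 1].
double interpolate_ext(double u1, double u2, double u3) {
    long double r = (long double)u2 * (1.0L - (long double)u1)
                  + (long double)u1 * (long double)u3;
    return (double)r;
}

int main() {
    std::printf("%.17g\n", interpolate_ext(0.5, 1.0, 2.0));
}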

answered Nov 12 '22 by Pascal Cuoq


The usual arithmetic conversions for floating-point types are applied before multiplication, division, and modulus:

The usual arithmetic conversions are performed on the operands and determine the type of the result.

§5.6 [expr.mul]

Similarly for addition and subtraction:

The usual arithmetic conversions are performed for operands of arithmetic or enumeration type.

§5.7 [expr.add]

The usual arithmetic conversions for floating point types are laid out in the standard as follows:

Many binary operators that expect operands of arithmetic or enumeration type cause conversions and yield result types in a similar way. The purpose is to yield a common type, which is also the type of the result. This pattern is called the usual arithmetic conversions, which are defined as follows:

[...]

— If either operand is of type long double, the other shall be converted to long double.

— Otherwise, if either operand is double, the other shall be converted to double.

— Otherwise, if either operand is float, the other shall be converted to float.

§5 [expr]
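These rules can be verified at compile time; a small sketch:

#include <type_traits>

float f = 0.0f;
double d = 0.0;
long double ld = 0.0L;

// The type of each expression is the common type chosen by the
// usual arithmetic conversions.
static_assert(std::is_same<decltype(f * d), double>::value,
              "float operand is converted to double");
static_assert(std::is_same<decltype(d + ld), long double>::value,
              "double operand is converted to long double");
static_assert(std::is_same<decltype(f - f), float>::value,
              "two float operands stay float");

int main() {}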

The actual form/precision of these floating-point types is implementation-defined:

The type double provides at least as much precision as float, and the type long double provides at least as much precision as double. The set of values of the type float is a subset of the set of values of the type double; the set of values of the type double is a subset of the set of values of the type long double. The value representation of floating-point types is implementation-defined.

§3.9.1 [basic.fundamental]

answered Nov 11 '22 by Robert Allan Hennigan Leahy


  1. For floating-point multiplication: FP multipliers internally use twice the width of the operands to form the intermediate product (the product of two p-bit significands fits in 2p bits, so it equals the infinitely precise result) and then round it to the target precision. Thus you should not worry about multiplication: the result is correctly rounded.
  2. For floating-point addition, the result is also correctly rounded, since standard FP adders use three extra bits (guard, round, and sticky), which suffice to compute a correctly rounded result.
  3. For division, remainder, and complicated functions such as the transcendentals (sin, log, exp, etc.), it depends mainly on the architecture and the libraries used. If you need correctly rounded results for division or any other complicated function, I recommend the MPFR library; see the sketch below.
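A minimal MPFR sketch (C API, also usable from C++; link with -lmpfr -lgmp), computing a correctly rounded quotient at 113-bit precision:

#include <mpfr.h>
#include <cstdio>

int main() {
    // 1/3 correctly rounded to 113 bits of precision (binary128-like).
    mpfr_t a, b, q;
    mpfr_inits2(113, a, b, q, (mpfr_ptr) 0);
    mpfr_set_d(a, 1.0, MPFR_RNDN);
    mpfr_set_d(b, 3.0, MPFR_RNDN);
    mpfr_div(q, a, b, MPFR_RNDN);  // division, rounded to nearest
    mpfr_printf("1/3 = %.35Rg\n", q);
    mpfr_clears(a, b, q, (mpfr_ptr) 0);
}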
answered Nov 12 '22 by Garp