It is nearly impossible(*) to provide strict IEEE 754 semantics at reasonable cost when the only floating-point instructions one is allowed to use are the 387 ones. It is particularly hard when one wishes to keep the FPU working on the full 64-bit significand so that the `long double` type is available for extended precision. The usual “solution” is to do intermediate computations at the only available precision, and to convert to the lower precision at more or less well-defined occasions.
Recent versions of GCC handle excess precision in intermediate computations according to the interpretation laid out by Joseph S. Myers in a 2008 post to the GCC mailing list. This description makes a program compiled with `gcc -std=c99 -mno-sse2 -mfpmath=387` completely predictable, to the last bit, as far as I understand. And if by chance it isn't, it is a bug and it will be fixed: Joseph S. Myers' stated intention in his post is to make it predictable.
Is it documented how Clang handles excess precision (say, when the option `-mno-sse2` is used), and where?
(*) EDIT: this is an exaggeration. It is slightly annoying but not that difficult to emulate binary64 when one is allowed to configure the x87 FPU to use a 53-bit significand.
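(As an illustration of that footnote — this sketch is not part of the original question; it assumes glibc's `<fpu_control.h>`, and the helper name is mine — configuring the x87 precision control to a 53-bit significand can look like this:)

```c
#include <fpu_control.h>   /* glibc-specific; provides _FPU_GETCW/_FPU_SETCW */

/* Sketch: switch the x87 FPU to a 53-bit significand ("double" precision
   control), leaving the rounding mode and exception masks untouched. */
static void x87_use_53bit_significand(void)
{
    fpu_control_t cw;
    _FPU_GETCW(cw);
    cw = (cw & ~_FPU_EXTENDED) | _FPU_DOUBLE;   /* clear the PC field, set 53-bit */
    _FPU_SETCW(cw);
}
```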
Following a comment by R.. below, here is the log of a short interaction of mine with the most recent version of Clang I have:
```
Hexa:~ $ clang -v
Apple clang version 4.1 (tags/Apple/clang-421.11.66) (based on LLVM 3.1svn)
Target: x86_64-apple-darwin12.4.0
Thread model: posix
Hexa:~ $ cat fem.c
#include <stdio.h>
#include <math.h>
#include <float.h>
#include <fenv.h>

double x;
double y = 2.0;
double z = 1.0;

int main(){
  x = y + z;
  printf("%d\n", (int) FLT_EVAL_METHOD);
}
Hexa:~ $ clang -std=c99 -mno-sse2 fem.c
Hexa:~ $ ./a.out
0
Hexa:~ $ clang -std=c99 -mno-sse2 -S fem.c
Hexa:~ $ cat fem.s
…
    movl    $0, %esi
    fldl    _y(%rip)
    fldl    _z(%rip)
    faddp   %st(1)
    movq    _x@GOTPCREL(%rip), %rax
    fstpl   (%rax)
…
```
This does not answer the originally posed question, but if you are a programmer working with similar issues, this answer might help you.
I really don't see where the perceived difficulty is. Providing strict IEEE-754 binary64 semantics while being limited to 80387 floating-point math, and retaining 80-bit `long double` computation, seems only to require following the well-specified C99 casting rules, and both GCC-4.6.3 and clang-3.0 (based on LLVM 3.0) appear to do so.
Edited to add: Yet, Pascal Cuoq is correct: neither gcc-4.6.3 nor clang-llvm-3.0 actually enforces those rules correctly for '387 floating-point math. Given the proper compiler options, the rules are correctly applied to expressions evaluated at compile time, but not to run-time expressions. There are workarounds, listed after the break below.
I write molecular dynamics simulation code, and am very familiar with the repeatability/predictability requirements, and also with the desire to retain the maximum precision available when possible, so I do claim I know what I am talking about here. This answer should show that the tools exist and are simple to use; the problems arise from not being aware of, or not using, those tools.
(A favorite example of mine is the Kahan summation algorithm. With C99 and proper casting (adding casts to e.g. the Wikipedia example code), no tricks or extra temporary variables are needed at all. The implementation works regardless of compiler optimization level, including at `-O3` and `-Ofast`.)
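For illustration, here is a minimal sketch of such a cast-protected Kahan summation (the function name and signature are mine, not from the original answer); each cast forces the intermediate back to binary64, which is exactly what C99 requires of casts and assignments:

```c
#include <stddef.h>

/* Kahan-compensated summation of n doubles.  The (double) casts discard
   any excess range and precision (C99 5.2.4.2.2), so the compensation
   term c stays meaningful even if intermediates live in 80-bit registers. */
double kahan_sum(const double *x, size_t n)
{
    double sum = 0.0, c = 0.0;
    for (size_t i = 0; i < n; i++) {
        const double y = (double)(x[i] - c);
        const double t = (double)(sum + y);
        c = (double)((t - sum) - y);
        sum = t;
    }
    return sum;
}
```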
C99 explicitly states (in 5.2.4.2.2) that casting and assignment both remove all extra range and precision. This means that you can use `long double` arithmetic by defining the temporary variables used during computation as `long double`, also casting your input variables to that type; whenever an IEEE-754 binary64 result is needed, just cast to `double`.
On '387, the cast generates an assignment and a load with both of the above compilers; this correctly rounds the 80-bit value to IEEE-754 binary64. This cost is very reasonable in my opinion. The exact time taken depends on the architecture and the surrounding code; usually it can be interleaved with other code to bring the cost down to negligible levels. When SSE or AVX is available, their registers are separate from the 80-bit 80387 registers, and the cast is usually done by moving the value to an SSE/AVX register.
(I prefer production code to use a specific floating-point type, say `tempdouble` or such, for temporary variables, so that it can be defined as either `double` or `long double` depending on the architecture and the speed/precision tradeoffs desired.)
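A sketch of what that can look like (the macro and type names here are mine, purely illustrative):

```c
/* Pick the type of computational temporaries once, per build configuration. */
#if defined(USE_EXTENDED_TEMPORARIES)
typedef long double tempdouble;   /* keep 80-bit intermediates on '387 */
#else
typedef double tempdouble;        /* strict binary64 intermediates */
#endif
```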
In a nutshell:

Don't assume `(expression)` is of `double` precision just because all the variables and literal constants are. Write it as `(double)(expression)` if you want the result at `double` precision.
This applies to compound expressions, too, and may sometimes lead to unwieldy expressions with many levels of casts.
If you have `expr1` and `expr2` that you wish to compute at 80-bit precision, but also need their product with each factor first rounded to 64 bits, use
```c
long double  expr1;
long double  expr2;
double       product = (double)(expr1) * (double)(expr2);
```
Note that `product` is computed as a product of two 64-bit values; it is not computed at 80-bit precision and then rounded down. Calculating the product at 80-bit precision and then rounding down would be
```c
double other = expr1 * expr2;
```
or, adding descriptive casts that tell you exactly what is happening,
```c
double other = (double)((long double)(expr1) * (long double)(expr2));
```
It should be obvious that `product` and `other` often differ.
The C99 casting rules are just another tool you must learn to wield, if you work with mixed 32-bit/64-bit/80-bit/128-bit floating-point values. Really, you encounter the exact same issues if you mix binary32 and binary64 floats (`float` and `double` on most architectures)!
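As a quick illustration of that last point (this snippet is mine, not from the original answer), the very same mismatch shows up between `float` and `double`:

```c
#include <stdio.h>

int main(void)
{
    float  f = 1.0f / 10.0f;   /* binary32 rounding of 1/10 */
    double d = 1.0  / 10.0;    /* binary64 rounding of 1/10 */

    /* f is promoted to double for the comparison, but it keeps its
       binary32 value, so the two roundings of 1/10 differ. */
    printf("f == d        : %s\n", (f == d) ? "true" : "false");

    /* Casting d to float makes both sides binary32 roundings of 1/10. */
    printf("f == (float)d : %s\n", (f == (float)d) ? "true" : "false");
    return 0;
}
```

On typical targets this prints `false` and then `true`, mirroring the `d == ld` cases in the test program below.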
Perhaps rewriting Pascal Cuoq's exploration code, to correctly apply casting rules, makes this clearer?
```c
#include <stdio.h>

#define TEST(eq) printf("%-56s%s\n", "" # eq ":", (eq) ? "true" : "false")

int main(void)
{
    double d = 1.0 / 10.0;
    long double ld = 1.0L / 10.0L;

    printf("sizeof (double) = %d\n", (int)sizeof (double));
    printf("sizeof (long double) == %d\n", (int)sizeof (long double));

    printf("\nExpect true:\n");
    TEST(d == (double)(0.1));
    TEST(ld == (long double)(0.1L));
    TEST(d == (double)(1.0 / 10.0));
    TEST(ld == (long double)(1.0L / 10.0L));
    TEST(d == (double)(ld));
    TEST((double)(1.0L/10.0L) == (double)(0.1));
    TEST((long double)(1.0L/10.0L) == (long double)(0.1L));

    printf("\nExpect false:\n");
    TEST(d == ld);
    TEST((long double)(d) == ld);
    TEST(d == 0.1L);
    TEST(ld == 0.1);
    TEST(d == (long double)(1.0L / 10.0L));
    TEST(ld == (double)(1.0L / 10.0));

    return 0;
}
```
The output, with both GCC and clang, is
```
sizeof (double) = 8
sizeof (long double) == 12

Expect true:
d == (double)(0.1):                                     true
ld == (long double)(0.1L):                              true
d == (double)(1.0 / 10.0):                              true
ld == (long double)(1.0L / 10.0L):                      true
d == (double)(ld):                                      true
(double)(1.0L/10.0L) == (double)(0.1):                  true
(long double)(1.0L/10.0L) == (long double)(0.1L):       true

Expect false:
d == ld:                                                false
(long double)(d) == ld:                                 false
d == 0.1L:                                              false
ld == 0.1:                                              false
d == (long double)(1.0L / 10.0L):                       false
ld == (double)(1.0L / 10.0):                            false
```
except that recent versions of GCC promote the right-hand side of `ld == 0.1` to `long double` first (i.e. to `ld == 0.1L`), yielding `true`, and that with SSE/AVX, `long double` is 128-bit.
For the pure '387 tests, I used
```
gcc   -W -Wall -m32 -mfpmath=387 -mno-sse ... test.c -o test
clang -W -Wall -m32 -mfpmath=387 -mno-sse ... test.c -o test
```
with various optimization flag combinations in place of `...`, including `-fomit-frame-pointer`, `-O0`, `-O1`, `-O2`, `-O3`, and `-Os`.
Using any other flags or C99 compilers should lead to the same results, except for the `long double` size (and the `ld == 0.1` result for current GCC versions). If you encounter any differences, I'd be very grateful to hear about them; I may need to warn my users of such compilers (compiler versions). Note that Microsoft does not support C99, so they are completely uninteresting to me.
Pascal Cuoq does bring up an interesting problem in the comment chain below, which I didn't immediately recognize.
When evaluating an expression, both GCC and clang with `-mfpmath=387` specify that all expressions are evaluated using 80-bit precision. This leads, for example, to
```
7491907632491941888 = 0x1.9fe2693112e14p+62
                    = 110011111111000100110100100110001000100101110000101000000000000

5698883734965350400 = 0x1.3c5a02407b71cp+62
                    = 100111100010110100000001001000000011110110111000111000000000000

7491907632491941888 * 5698883734965350400
  = 42695510550671093541385598890357555200
  = 100000000111101101101100110001101000010100100001011110111111111111110011000111000001011101010101100011000000000000000000000000
```
yielding incorrect results, because that string of ones in the middle of the binary result sits exactly where the 53-bit and 64-bit mantissas (of 64-bit and 80-bit floating-point numbers, respectively) differ. So, while the expected result is
```
42695510550671088819251326462451515392 = 0x1.00f6d98d0a42fp+125
  = 100000000111101101101100110001101000010100100001011110000000000000000000000000000000000000000000000000000000000000000000000000
```
the result obtained with just `-std=c99 -m32 -mno-sse -mfpmath=387` is
```
42695510550671098263984292201741942784 = 0x1.00f6d98d0a43p+125
  = 100000000111101101101100110001101000010100100001100000000000000000000000000000000000000000000000000000000000000000000000000000
```
In theory, you should be able to tell gcc and clang to enforce the correct C99 rounding rules by using options
```
-std=c99 -m32 -mno-sse -mfpmath=387 -ffloat-store -fexcess-precision=standard
```
However, this only affects expressions the compiler optimizes, and does not seem to fix the '387 handling at all. If you use e.g. `clang -O1 -std=c99 -m32 -mno-sse -mfpmath=387 -ffloat-store -fexcess-precision=standard test.c -o test && ./test` with `test.c` being Pascal Cuoq's example program, you will get the correct result per IEEE-754 rules -- but only because the compiler optimizes away the expression, not using the '387 at all.
Simply put, instead of computing
```c
(double)d1 * (double)d2
```
both gcc and clang actually tell the '387 to compute
```c
(double)((long double)d1 * (long double)d2)
```
I believe this is a compiler bug affecting both gcc-4.6.3 and clang-llvm-3.0, and an easily reproduced one. (Pascal Cuoq points out that `FLT_EVAL_METHOD=2` means operations on double-precision arguments are always done at extended precision, but I cannot see any sane reason -- aside from having to rewrite parts of `libm` on '387 -- to do that in C99, considering that the IEEE-754 rules are achievable by the hardware! After all, the correct operation is easily achievable by the compiler, by modifying the '387 control word to match the precision of the expression. And, given that the compiler options that should force this behaviour -- `-std=c99 -ffloat-store -fexcess-precision=standard` -- make no sense if `FLT_EVAL_METHOD=2` behaviour is actually desired, there are no backwards-compatibility issues, either.) It is important to note that, given the proper compiler flags, expressions evaluated at compile time do get evaluated correctly, and that only expressions evaluated at run time get incorrect results.
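One way to see the compile-time versus run-time split for yourself (my own sketch, reusing the two values from the example above) is to block constant folding with `volatile`, so the multiplication really is performed by the '387 at run time:

```c
#include <stdio.h>

int main(void)
{
    /* volatile prevents the compiler from evaluating the product itself,
       so the '387 performs the multiplication at run time. */
    volatile double d1 = 7491907632491941888.0;
    volatile double d2 = 5698883734965350400.0;
    double product = (double)d1 * (double)d2;

    printf("%.0f\n", product);   /* may show the extended-precision result */
    return 0;
}
```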
The simplest workaround, and the portable one, is to use `fesetround(FE_TOWARDZERO)` (from `fenv.h`) to round all results towards zero.
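A minimal sketch of that workaround (the variable names are illustrative; on glibc, link with `-lm`):

```c
#include <fenv.h>
#include <stdio.h>

int main(void)
{
    volatile double a = 1.0, b = 3.0;   /* volatile: keep the division at run time */

    if (fesetround(FE_TOWARDZERO) != 0)
        fprintf(stderr, "FE_TOWARDZERO not supported on this target\n");
    printf("1/3 towards zero: %a\n", a / b);

    fesetround(FE_TONEAREST);           /* restore the default rounding mode */
    printf("1/3 to nearest:   %a\n", a / b);
    return 0;
}
```

Strictly speaking, code that changes the rounding mode should also declare `#pragma STDC FENV_ACCESS ON` (or use a compiler option such as GCC's `-frounding-math`), since otherwise the compiler is allowed to assume the default rounding mode when folding constants.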
In some cases, rounding towards zero may help with predictability and pathological cases. In particular, for intervals like `x = [0,1)`, rounding towards zero means the upper limit is never reached through rounding; this is important if you evaluate e.g. piecewise splines.
For the other rounding modes, you need to control the '387 hardware directly.

You can use either `_FPU_SETCW()` from `#include <fpu_control.h>`, or open-code it. For example, `precision.c`:
```c
#include <stdlib.h>
#include <stdio.h>
#include <limits.h>

#define FP387_NEAREST   0x0000
#define FP387_ZERO      0x0C00
#define FP387_UP        0x0800
#define FP387_DOWN      0x0400

#define FP387_SINGLE    0x0000
#define FP387_DOUBLE    0x0200
#define FP387_EXTENDED  0x0300

static inline void fp387(const unsigned short control)
{
    unsigned short cw = (control & 0x0F00) | 0x007f;
    __asm__ volatile ("fldcw %0" : : "m" (*&cw));
}

const char *bits(const double value)
{
    const unsigned char *const data = (const unsigned char *)&value;
    static char buffer[CHAR_BIT * sizeof value + 1];
    char *p = buffer;
    size_t i = CHAR_BIT * sizeof value;
    while (i-->0)
        *(p++) = '0' + !!(data[i / CHAR_BIT] & (1U << (i % CHAR_BIT)));
    *p = '\0';
    return (const char *)buffer;
}

int main(int argc, char *argv[])
{
    double d1, d2;
    char dummy;

    if (argc != 3) {
        fprintf(stderr, "\nUsage: %s 7491907632491941888 5698883734965350400\n\n", argv[0]);
        return EXIT_FAILURE;
    }

    if (sscanf(argv[1], " %lf %c", &d1, &dummy) != 1) {
        fprintf(stderr, "%s: Not a number.\n", argv[1]);
        return EXIT_FAILURE;
    }

    if (sscanf(argv[2], " %lf %c", &d2, &dummy) != 1) {
        fprintf(stderr, "%s: Not a number.\n", argv[2]);
        return EXIT_FAILURE;
    }

    printf("%s:\td1 = %.0f\n\t %s in binary\n", argv[1], d1, bits(d1));
    printf("%s:\td2 = %.0f\n\t %s in binary\n", argv[2], d2, bits(d2));

    printf("\nDefaults:\n");
    printf("Product = %.0f\n\t %s in binary\n", d1 * d2, bits(d1 * d2));

    printf("\nExtended precision, rounding to nearest integer:\n");
    fp387(FP387_EXTENDED | FP387_NEAREST);
    printf("Product = %.0f\n\t %s in binary\n", d1 * d2, bits(d1 * d2));

    printf("\nDouble precision, rounding to nearest integer:\n");
    fp387(FP387_DOUBLE | FP387_NEAREST);
    printf("Product = %.0f\n\t %s in binary\n", d1 * d2, bits(d1 * d2));

    printf("\nExtended precision, rounding to zero:\n");
    fp387(FP387_EXTENDED | FP387_ZERO);
    printf("Product = %.0f\n\t %s in binary\n", d1 * d2, bits(d1 * d2));

    printf("\nDouble precision, rounding to zero:\n");
    fp387(FP387_DOUBLE | FP387_ZERO);
    printf("Product = %.0f\n\t %s in binary\n", d1 * d2, bits(d1 * d2));

    return 0;
}
```
Using clang-llvm-3.0 to compile and run, I get the correct results,
```
clang -std=c99 -m32 -mno-sse -mfpmath=387 -O3 -W -Wall precision.c -o precision
./precision 7491907632491941888 5698883734965350400
7491907632491941888:   d1 = 7491907632491941888
     0100001111011001111111100010011010010011000100010010111000010100 in binary
5698883734965350400:   d2 = 5698883734965350400
     0100001111010011110001011010000000100100000001111011011100011100 in binary

Defaults:
Product = 42695510550671098263984292201741942784
     0100011111000000000011110110110110011000110100001010010000110000 in binary

Extended precision, rounding to nearest integer:
Product = 42695510550671098263984292201741942784
     0100011111000000000011110110110110011000110100001010010000110000 in binary

Double precision, rounding to nearest integer:
Product = 42695510550671088819251326462451515392
     0100011111000000000011110110110110011000110100001010010000101111 in binary

Extended precision, rounding to zero:
Product = 42695510550671088819251326462451515392
     0100011111000000000011110110110110011000110100001010010000101111 in binary

Double precision, rounding to zero:
Product = 42695510550671088819251326462451515392
     0100011111000000000011110110110110011000110100001010010000101111 in binary
```
In other words, you can work around the compiler issues by using `fp387()` to set the precision and rounding mode.
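For example, a critical section could be wrapped like this (a sketch that reuses the `fp387()` helper and `FP387_*` macros from `precision.c` above; the surrounding variables are illustrative):

```c
/* Round the product with a 53-bit significand, then restore extended
   precision for code (e.g. libm) that may expect it. */
fp387(FP387_DOUBLE | FP387_NEAREST);
double product = d1 * d2;              /* now rounded to a 53-bit significand */
fp387(FP387_EXTENDED | FP387_NEAREST);
```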
The downside is that some math libraries (`libm.a`, `libm.so`) may be written with the assumption that intermediate results are always computed at 80-bit precision. At least the GNU C library `fpu_control.h` on x86_64 has the comment "libm requires extended precision". Fortunately, you can take the '387 implementations from e.g. the GNU C library and implement them in a header file, or write a known-to-work `libm`, if you need the `math.h` functionality; in fact, I think I might be able to help there.