
Performance implications of long double. Why does C choose 64 bits instead of the hardware's 80-bit format for its default?

To be specific, I am talking about the x87 PC architecture and the C compiler.

I am writing my own interpreter and the reasoning behind the double datatype confuses me, especially where efficiency is concerned. Could someone explain WHY C has decided on a 64-bit double and not the hardware-native 80-bit one? And why has the hardware settled on an 80-bit format, given that it is not aligned? What are the performance implications of each? I would like to use an 80-bit double as my default numeric type, but the choices of the compiler developers make me concerned that this is not the best choice.

  1. double on x86 is only 2 bytes shorter, so why doesn't the compiler use the 10-byte long double by default?
  2. Can I get an example of the extra precision gained with an 80-bit long double vs a double?
  3. Why does Microsoft disable long double by default?
  4. In terms of magnitude, how much worse / slower is long double on typical x86/x64 PC hardware?
asked Apr 21 '12 by unixman83

People also ask

How many bits is a long double in C?

The long double type is guaranteed to have at least as much precision as a double, but the exact number of bits may vary from one hardware platform to another. The most typical implementations are either 80 or 128 bits. The IEEE standard for quadruple-precision floating-point numbers is 128 bits, consisting of one sign bit, a 15-bit exponent, and a 112-bit significand.

How many bits does a long double have?

With the GNU C Compiler, long double is 80-bit extended precision on x86 processors regardless of the physical storage used for the type (which can be either 96 or 128 bits). On some other architectures, long double can be double-double (e.g. on PowerPC) or 128-bit quadruple precision (e.g. on SPARC).

What is the difference between double and long double in C?

The difference is the size. They may be the same, or a long double may be larger, meaning it can hold greater (and smaller) values with higher precision. In general, a type with long has at least the precision and range of the type without long, because it may use more bytes.

What's the difference between float double and long double?

They have different sizes and precision. All of them represent real values such as 3.14. The main difference is that a float is typically stored in 4 bytes (about 6 significant decimal digits), a double in 8 bytes (about 15 digits), and a long double in at least as much storage as a double, often more.
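A minimal sketch to check this on your own machine; the values quoted in the comments are typical figures for mainstream compilers and may differ on your platform:

#include <stdio.h>
#include <float.h>

int main(void)
{
    /* Mantissa bits: 53 for IEEE double; typically 64 for x87 extended
     * (GCC on x86), 53 with MSVC (where long double == double), 106 for
     * PowerPC double-double, 113 for 128-bit IEEE quad (e.g. SPARC). */
    printf("DBL_MANT_DIG  = %d\n", DBL_MANT_DIG);
    printf("LDBL_MANT_DIG = %d\n", LDBL_MANT_DIG);

    /* Storage may be padded: an 80-bit value is commonly stored in
     * 12 bytes (32-bit x86) or 16 bytes (x86-64) for alignment. */
    printf("sizeof(long double) = %zu bytes\n", sizeof(long double));
    return 0;
}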


2 Answers

The answer, according to Mysticial, is that Microsoft uses SSE2 for its double data type. The x87 floating-point unit (FPU) is seen as outdated and slow compared to modern CPU extensions. SSE2 does not support the 80-bit format, hence the compiler's choice of 64-bit precision.

On the 32-bit x86 architecture, since not all CPUs support SSE2, Microsoft still targets the x87 FPU unless the compiler switch /arch:SSE2 is given, which would make the code incompatible with those older CPUs.
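As a rough illustration, C99's FLT_EVAL_METHOD macro from <float.h> reports how a given build evaluates intermediate results; the flags in the comments are the usual options for choosing x87 or SSE2 code generation on 32-bit x86:

/* Typical ways to pick the code path on 32-bit x86, for example:
 *   MSVC:  cl /arch:SSE2 prog.c             -- SSE2 instead of x87
 *   GCC:   gcc -msse2 -mfpmath=sse prog.c   -- SSE2 math
 *          gcc -mfpmath=387 prog.c          -- x87 math (80-bit registers)
 */
#include <stdio.h>
#include <float.h>

int main(void)
{
#ifdef FLT_EVAL_METHOD
    /* 0: intermediates kept in each type's own precision (typical SSE2)
     * 2: intermediates kept in long double precision (typical x87)     */
    printf("FLT_EVAL_METHOD = %d\n", (int)FLT_EVAL_METHOD);
#else
    printf("FLT_EVAL_METHOD not defined by this compiler\n");
#endif
    return 0;
}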

answered Oct 01 '22 by unixman83


Wrong question. It has nothing to do with C; as far as I know, practically all languages use 32-bit single precision and 64-bit double precision as their standard floating-point formats. C, as a language that has to support different hardware, only requires

sizeof(float) <= sizeof(double) <= sizeof(long double)

so it is perfectly acceptable for a specific C compiler to use 32-bit floats for all three types.
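For example, this prints 4 <= 8 <= 16 on a typical x86-64 GCC build and 4 <= 8 <= 8 with MSVC, but only the ordering is promised:

#include <stdio.h>

int main(void)
{
    /* Only the ordering is guaranteed; the concrete sizes are
     * implementation-defined. */
    printf("%zu <= %zu <= %zu\n",
           sizeof(float), sizeof(double), sizeof(long double));
    return 0;
}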

On Kahan's advice, Intel decided to support as much precision as possible and to perform calculations on the less precise formats (32 and 64 bit) internally with 80-bit precision.

The difference in precision and exponent range: 64-bit has approximately 16 decimal digits and a maximum decimal exponent of 308; 80-bit has 19 digits and a maximum decimal exponent of 4932.

Being much more precise and having a far greater exponent range, you can calculate intermediate results without overflow or underflow, and your final result has fewer rounding errors.
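A concrete sketch of both effects, assuming a typical x86-64 build where double intermediates really are 64-bit (with MSVC, where long double is just double, both halves behave identically):

#include <stdio.h>
#include <float.h>

int main(void)
{
    /* Precision: reliable decimal digits and maximum decimal exponent. */
    printf("double:      %2d digits, max exp 10^%d\n", DBL_DIG,  DBL_MAX_10_EXP);
    printf("long double: %2d digits, max exp 10^%d\n", LDBL_DIG, LDBL_MAX_10_EXP);

    /* Exponent range: the intermediate 1e600 overflows a 64-bit double
     * but fits easily in an 80-bit extended value. */
    double      d  = 1e300  * 1e300  / 1e300;   /* inf on a 64-bit double path */
    long double ld = 1e300L * 1e300L / 1e300L;  /* ~1e300, no overflow         */
    printf("double result:      %g\n", d);
    printf("long double result: %Lg\n", ld);
    return 0;
}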

So the question is rather why long double does not use the 80-bit format everywhere. In fact, many compilers did support it, but lack of use and the race for benchmark performance effectively killed it.

answered Oct 01 '22 by Thorsten S.