For specifics, I am talking about the x87 PC architecture and the C compiler. I am writing my own interpreter, and the reasoning behind the double datatype confuses me, especially where efficiency is concerned. Could someone explain WHY C settled on a 64-bit double and not the hardware-native 80-bit format? And why did the hardware settle on an 80-bit format, given that it is not aligned? What are the performance implications of each? I would like to use an 80-bit double as my default numeric type, but the choices of the compiler developers make me worry that this is not the best choice.
A double on x86 is only 2 bytes shorter, so why doesn't the compiler use the 10-byte long double by default?

The long double type is guaranteed to have at least the range and precision of a double, but the exact number of bits varies from one hardware platform to another. The most typical implementations are either 80 or 128 bits. The IEEE standard for quadruple-precision floating point is 128 bits, consisting of one sign bit, a 15-bit exponent, and a 112-bit significand.
With the GNU C Compiler, long double is 80-bit extended precision on x86 processors regardless of the physical storage used for the type (which can be either 96 or 128 bits). On some other architectures, long double can be double-double (e.g. on PowerPC) or 128-bit quadruple precision (e.g. on SPARC).
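To see what long double actually is on a given toolchain, a minimal C sketch (not part of the quoted answer) can print the storage size and the <float.h> limits; with GCC on x86 you would typically see 64 significand bits padded out to 12 or 16 bytes of storage:

    /* Sketch: querying the long double format via <float.h>.
       Typical results: LDBL_MANT_DIG == 64  -> x87 80-bit extended,
                        106 -> double-double (PowerPC),
                        113 -> IEEE binary128 (SPARC). */
    #include <stdio.h>
    #include <float.h>

    int main(void) {
        printf("sizeof(long double) = %zu bytes\n", sizeof(long double));
        printf("LDBL_MANT_DIG       = %d significand bits\n", LDBL_MANT_DIG);
        printf("LDBL_DIG            = %d decimal digits\n", LDBL_DIG);
        printf("LDBL_MAX_10_EXP     = %d\n", LDBL_MAX_10_EXP);
        return 0;
    }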
The difference is the size. They may be the same, or a long double may be larger; larger means it can hold values of greater magnitude (and closer to zero) with higher precision. In general, the long variant of a type can offer more precision and a greater range than the plain type because it may use more bytes.
They have different sizes and precision. All of them represent real values such as 3.14. The main difference is that a float is typically stored in 4 bytes (about 6 significant decimal digits), a double in 8 bytes (about 15 digits), and a long double in at least as many bytes as a double, often more.
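As a rough illustration of those precision differences (a sketch, not from the original answers; exact digit counts depend on the platform), dividing 1 by 3 in each type and printing many digits shows where each one runs out of correct digits:

    /* Sketch: the same computation carried out in float, double and long double.
       On a typical x86 target: float keeps ~6-7 correct digits, double ~15-16,
       and an 80-bit long double ~18-19. */
    #include <stdio.h>

    int main(void) {
        float       f  = 1.0f / 3.0f;
        double      d  = 1.0  / 3.0;
        long double ld = 1.0L / 3.0L;

        printf("float:       %.25f\n",  f);
        printf("double:      %.25f\n",  d);
        printf("long double: %.25Lf\n", ld);
        return 0;
    }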
The answer, according to Mysticial, is that Microsoft uses SSE2 for its double data type. The x87 floating-point unit (FPU) is seen as outdated and slow compared to modern CPU extensions. SSE2 does not support an 80-bit format, hence the compiler's choice of 64-bit precision.
On the 32-bit x86 architecture, since not all CPUs have SSE2, Microsoft still uses the x87 FPU unless the compiler switch /arch:SSE2 is given, which makes the code incompatible with those older CPUs.
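If you want to check which unit your own build targets, a small sketch (my assumption about the commonly used predefined macros __SSE2__, _M_IX86_FP and _M_X64; they are not mentioned in the answer itself) can report it at compile time:

    /* Sketch: detect whether this translation unit is compiled for SSE2
       (64-bit double math) or falls back to the x87 FPU. */
    #include <stdio.h>

    int main(void) {
    #if defined(__SSE2__) || defined(_M_X64) || (defined(_M_IX86_FP) && _M_IX86_FP >= 2)
        puts("double arithmetic is compiled for SSE2");
    #else
        puts("double arithmetic uses the x87 FPU on this target");
    #endif
        return 0;
    }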
Wrong question: it has nothing to do with C specifically. AFAIK, all mainstream languages use the standard IEEE formats of 32-bit single precision and 64-bit double precision. C, as a language that supports different hardware, requires only
sizeof(float) <= sizeof(double) <= sizeof(long double)
so it would be perfectly acceptable for a specific C compiler to use 32-bit floats for all of these types.
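That ordering is easy to pin down in code; a minimal C11 sketch (illustrative only) turns it into a compile-time check:

    /* Sketch: the only size relationship the answer above relies on.
       A conforming compiler may make all three types the same size. */
    #include <assert.h>   /* static_assert (C11) */

    static_assert(sizeof(float) <= sizeof(double) &&
                  sizeof(double) <= sizeof(long double),
                  "float <= double <= long double in size");

    int main(void) { return 0; }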
Intel decided, on Kahan's advice, to support as much precision as possible, and that calculations in the less precise formats (32- and 64-bit) should be performed internally with 80-bit precision.
The difference lies in precision and exponent range: 64-bit has approximately 16 decimal digits and a maximum decimal exponent of 308; 80-bit has 19 digits and a maximum exponent of 4932.
Being much more precise and having a far greater exponent range, you can compute intermediate results without overflow or underflow, and the final result suffers less rounding error.
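A short sketch (assuming an 80-bit long double, as with GCC on x86; with MSVC, where long double is 64-bit, both versions overflow) shows the kind of intermediate overflow the wider exponent range avoids:

    /* Sketch: naive vector magnitude. x*x overflows double's exponent range
       (max ~1e308) and yields inf, while the 80-bit format (max ~1e4932)
       holds the intermediate value and the final root fits back in range. */
    #include <stdio.h>
    #include <math.h>

    int main(void) {
        double x = 1e200, y = 1e200;

        double      naive = sqrt(x * x + y * y);                        /* inf */
        long double wide  = sqrtl((long double)x * x + (long double)y * y);

        printf("naive double:    %g\n",  naive);   /* inf           */
        printf("via long double: %Lg\n", wide);    /* ~1.41421e+200 */
        return 0;
    }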
So the remaining question is why long double does not use the 80-bit format everywhere. In fact, many compilers did support it, but a lack of use and the race for benchmark performance effectively killed it.