Does anyone know how to find out the precision of long double on a specific platform? I appear to be losing precision after 17 decimal digits, which is the same as when I just use double. I would expect to get more, since double is represented with 8 bytes on my platform, while long double is 12 bytes.
Before you ask, this is for Project Euler, so yes I do need more than 17 digits. :)
EDIT: Thanks for the quick replies. I just confirmed that I can only get 18 decimal digits by using long double on my system.
long double guarantees at least 15, 18, or 33 significant decimal digits, depending on whether the underlying format is 64-bit IEEE double, 80-bit x87 extended, or 128-bit IEEE quadruple precision.
The double type provides 8-byte storage for numbers using the IEEE 754 floating-point format.
sizeof(long double) is 16 (i.e., 128 bits) on Intel Macs for alignment purposes, but the format actually provides only 80 bits of precision according to Apple's documentation. On Apple Silicon, long double is just double.
long long is an integer type and long double is a floating-point type. long double must be at least as wide as double but need not be wider; that is up to the implementation. long float does not exist in standard C.
You can find out with std::numeric_limits:

    #include <iostream> // std::cout
    #include <limits>   // std::numeric_limits

    int main() {
        std::cout << std::numeric_limits<long double>::digits10 << std::endl;
    }
You can use <cfloat>. Specifically:
LDBL_DIG