I have learned from Wikipedia that a double-precision number has at most 15-17 significant decimal digits.
However, for the simple C++ program below,
#include <cmath>
#include <iomanip>
#include <iostream>

int main() {
    double x = std::pow(10, -16);
    std::cout << "x=" << std::setprecision(100000) << x << std::endl;
}
I get
x=9.999999999999999790977867240346035618411149408467364363417573258630000054836273193359375e-17
which has 88 significant decimal digits. This apparently contradicts the aforementioned claim from Wikipedia. Can anyone clarify what I am misunderstanding? Thanks.
There is no contradiction. As you can see, the value of x becomes incorrect at the first 7 in its decimal expansion; I count 16 correct digits before that. std::setprecision doesn't control the precision of the values passed to std::cout; it simply displays as many digits as you request. The extra digits are the exact decimal expansion of the binary value actually stored in x, not meaningful precision.

Perhaps std::setprecision is poorly named, and should have been called std::displayprecision, but it is doing its job. From a linguistic perspective, think of std::setprecision as setting the precision of std::cout, not as attempting to control the precision of the arguments to std::cout.