I'm hoping you can help with a bit of a head-scratcher.
I have written a function template to calculate standard deviations:
#include <cmath>

template <class T>
double GetStandardDeviation(T* valueArray, int populationSize)
{
    double average;
    double cumulativeValue = 0;             // accumulate in double so integer types can't overflow
    double cumulativeSquaredDeviation = 0;

    // calculate average
    for (int i = 0; i < populationSize; i++)
    {
        cumulativeValue += valueArray[i];
    }
    average = cumulativeValue / (double)populationSize;

    // calculate S.D. -- the square root of the mean squared deviation
    for (int i = 0; i < populationSize; i++)
    {
        double difference = average - (double)valueArray[i];
        double squaredDifference = difference * difference;
        cumulativeSquaredDeviation += squaredDifference;
    }
    return std::sqrt(cumulativeSquaredDeviation / (double)populationSize);
}
And this seems to be doing everything correctly, except that the result only ever comes out to 5 decimal places. Can anyone suggest a reason for this? I'm stumped!
An IEEE-754 double precision value has about 15 decimal digits of precision, so it would be limited to five decimal places only if your values were up around the tens of billions.
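As a quick sanity check, you can ask the library how many decimal digits a double preserves; assuming an IEEE-754 double, this prints 15:

#include <iostream>
#include <limits>

int main() {
    // digits10 is the number of decimal digits a double can round-trip without loss;
    // for an IEEE-754 double this is 15.
    std::cout << std::numeric_limits<double>::digits10 << '\n';
    return 0;
}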
What you're most likely seeing is simply the default output format for doubles, which, like C's %g, gives you only six significant digits.
You can see this in the following code:
#include <iostream>
#include <iomanip>

int main() {
    double d = 1.23456789012345;
    std::cout << d << '\n';
    std::cout << std::setprecision(16) << d << '\n';
    return 0;
}
The output of which is:
1.23457
1.23456789012345
Table 89 in the C++03 standard (in 27.4.4.1, basic_ios constructors) shows the post-conditions after calling basic_ios::init(), and it gives the default precision as 6. In C++11, it says the same thing, only in Table 128 under 27.5.5.2, basic_ios constructors.
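So, if you want more digits out of your standard deviation, raise the stream precision before printing it. A minimal sketch (the sd value below is just a hypothetical stand-in for a GetStandardDeviation() result):

#include <iostream>
#include <iomanip>

int main() {
    double sd = 1.4142135623730951;   // hypothetical GetStandardDeviation() result

    std::cout << sd << '\n';                          // default precision (6): 1.41421
    std::cout << std::setprecision(15) << sd << '\n'; // up to 15 significant digits
    std::cout << std::fixed << std::setprecision(10)  // exactly 10 digits after the point
              << sd << '\n';
    return 0;
}

Note that std::fixed makes setprecision control the number of fractional digits rather than significant digits, which is usually what people mean by "decimal places".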