I am curious to know what algorithm R's mean function uses. Is there some reference to the numerical properties of this algorithm?
I found the following C code in summary.c:do_summary():
case REALSXP:
    PROTECT(ans = allocVector(REALSXP, 1));
    for (i = 0; i < n; i++) s += REAL(x)[i];
    s /= n;
    if(R_FINITE((double)s)) {
        for (i = 0; i < n; i++) t += (REAL(x)[i] - s);
        s += t/n;
    }
    REAL(ans)[0] = s;
    break;
It seems to do a straight up mean:
for (i = 0; i < n; i++) s += REAL(x)[i];
s /= n;
Then it adds what I assume is a numerical correction, which seems to be the mean difference of the data from that first estimate of the mean:
for (i = 0; i < n; i++) t += (REAL(x)[i] - s);
s += t/n;
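
Stripped of the R internals (PROTECT, allocVector, REAL), the whole computation appears equivalent to something like this standalone function (my own reconstruction for experimenting, not R's actual code):

#include <math.h>
#include <stddef.h>

/* Reconstruction of the two-pass mean from summary.c, without R internals. */
double two_pass_mean(const double *x, size_t n)
{
    double s = 0.0, t = 0.0;
    size_t i;
    for (i = 0; i < n; i++) s += x[i];          /* pass 1: naive sum */
    s /= n;                                     /* first estimate of the mean */
    if (isfinite(s)) {                          /* same guard as R_FINITE */
        for (i = 0; i < n; i++) t += x[i] - s;  /* pass 2: sum of residuals */
        s += t / n;                             /* correct the estimate */
    }
    return s;
}
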
I haven't been able to track this algorithm down anywhere ("mean" is not a great search term).
Any help would be much appreciated.
I'm not sure what algorithm this is, but Martin Maechler mentioned the updating method of West, 1979 in response to PR#1228, and Brian Ripley implemented it in R-2.3.0. I couldn't find anything in the source code or the version control logs that names the actual algorithm used. It was implemented in cov.c in revision 37389 and in summary.c in revision 37393.
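
For what it's worth, the updating recurrence usually attributed to West, 1979 looks something like the sketch below (my reconstruction of the recurrence, not code from R; note that it is not the same as the two-pass code quoted in the question):

#include <stddef.h>

/* Incremental mean in the style of West's updating method: the
 * estimate m is nudged by each new value, so no large running
 * sum ever forms. */
double updating_mean(const double *x, size_t n)
{
    double m = 0.0;
    for (size_t i = 0; i < n; i++)
        m += (x[i] - m) / (double)(i + 1);
    return m;
}

Because the running estimate m always stays within the range of the data, the additions never involve a huge intermediate sum.
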
I believe the R algorithm works as follows.
The first, standard calculation of the mean is really only an estimate of the arithmetic mean, because of floating-point errors (which get worse the further the accumulating sum drifts from the magnitude of the elements being added to it).
The second pass sums the differences of the elements from that estimated mean. In exact arithmetic there would be no net difference, as the values on either side of the mean balance out, but with floating-point error the residual t is generally nonzero. The differences from the mean are still computed with some error, but those errors should be smaller than the worst potential error between an element and the accumulating sum: the estimated mean at least lives somewhere within the range of the values, while the running sum can grow far outside it. Dividing by n gives the average difference from the estimated mean, which is then used to nudge the initial estimate closer to the true mean. You could repeat this to get closer and closer, but at some point the floating-point error in calculating the average difference itself will defeat you; I guess one pass of correction is close enough.
This was explained to me by my wife.
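
A quick way to see the effect (my own test harness, nothing from R's sources): feed both estimates data with a large common offset, so the low-order bits are lost in the naive sum. The exact digits printed will vary by platform:

#include <stdio.h>

/* Made-up value stream whose true mean is exactly 1e9: a large offset
 * plus a small alternating perturbation that cancels in pairs. */
static double val(long i) { return 1e9 + ((i % 2) ? 0.001 : -0.001); }

int main(void)
{
    const long n = 10000000;
    double s = 0.0, t = 0.0;
    long i;

    for (i = 0; i < n; i++) s += val(i);      /* pass 1: naive sum */
    s /= n;
    for (i = 0; i < n; i++) t += val(i) - s;  /* pass 2: residuals */

    printf("naive estimate:     %.17g\n", s);
    printf("two-pass corrected: %.17g\n", s + t / n);
    return 0;
}
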
I'm not sure what the source of the algorithm is, or how it compares to other methods such as Kahan summation. I guess I'll have to do some testing.
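
For reference, a minimal sketch of Kahan (compensated) summation applied to the mean, for comparison testing (my code, not from R):

#include <stddef.h>

/* Kahan summation: c carries the low-order bits lost in each
 * addition forward into the next one. */
double kahan_mean(const double *x, size_t n)
{
    double sum = 0.0, c = 0.0;
    for (size_t i = 0; i < n; i++) {
        double y = x[i] - c;   /* subtract the compensation so far */
        double t = sum + y;    /* big + small: low bits of y are lost */
        c = (t - sum) - y;     /* algebraically zero; captures what was lost */
        sum = t;
    }
    return sum / n;
}

Note that an optimizing compiler invoked with fast-math style flags may legally simplify the compensation away, so any testing should be done without such flags.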