How to calculate a pooled standard deviation in R?

I want to calculate the pooled (actually weighted) standard deviation for all the unique sites in my data frame.

The values for these sites come from single-species forest stands, and I want to pool the mean and the SD so that I can compare broadleaved stands with conifer stands.
This is the data frame (df) with values for the broadleaved stands:

keybl           n   mean    sd
Vest02DenmDesp  3   58.16   6.16
Vest02DenmDesp  5   54.45   7.85
Vest02DenmDesp  3   51.34   1.71
Vest02DenmDesp  3   59.57   5.11
Vest02DenmDesp  5   62.89   10.26
Vest02DenmDesp  3   77.33   2.14
Mato10GermDesp  4   41.89   12.6
Mato10GermDesp  4   11.92   1.8
Wawa07ChinDesp  18  0.097   0.004
Chen12ChinDesp  3   41.93   1.12
Hans11SwedDesp  2   1406.2  679.46
Hans11SwedDesp  2   1156.2  464.07
Hans11SwedDesp  2   4945.3  364.58
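
(For reference, the table above can be read straight into R like this; this is just a convenience snippet so the problem can be reproduced, not part of my actual data import:)

df <- read.table(header = TRUE, text = "
keybl           n   mean    sd
Vest02DenmDesp  3   58.16   6.16
Vest02DenmDesp  5   54.45   7.85
Vest02DenmDesp  3   51.34   1.71
Vest02DenmDesp  3   59.57   5.11
Vest02DenmDesp  5   62.89   10.26
Vest02DenmDesp  3   77.33   2.14
Mato10GermDesp  4   41.89   12.6
Mato10GermDesp  4   11.92   1.8
Wawa07ChinDesp  18  0.097   0.004
Chen12ChinDesp  3   41.93   1.12
Hans11SwedDesp  2   1406.2  679.46
Hans11SwedDesp  2   1156.2  464.07
Hans11SwedDesp  2   4945.3  364.58
")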

Keybl is the code for the site. The formula for the pooled SD is:

s = sqrt(((n1 - 1)*s1^2 + (n2 - 1)*s2^2) / (n1 + n2 - 2))

(Sorry I can't post pictures and did not find a link that would directly go to the formula)

Here 2 is the number of groups, so it will change depending on the site. I know this formula is used for a t-test between two groups one wants to compare; in this case I'm not planning to compare the groups, but my professor suggested I use this formula to get a weighted SD. I didn't find an R function that incorporates this formula the way I need it, so I tried to build my own. I am, however, new to R and not very good at writing functions and loops, so I hope for your help.
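
As a quick sanity check of the formula (just my own hand calculation, not part of the output I need), pooling only the first two Vest02DenmDesp rows would look like this:

n1 <- 3; s1 <- 6.16   # first Vest02DenmDesp row
n2 <- 5; s2 <- 7.85   # second Vest02DenmDesp row
sqrt(((n1 - 1) * s1^2 + (n2 - 1) * s2^2) / (n1 + n2 - 2))
# [1] 7.330089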

This is what I got so far:

sd = function(data) {
  nc1 = data[z, "nc"]
  sc1 = data[z, "sc"]
  nc2 = data[z + 1, "nc"]
  sc2 = data[z + 1, "sc"]
  sd1 = (nc1 - 1) * sc1^2 + (nc2 - 1) * sc2^2
  sd2 = sd1 / (nc1 + nc2 - length(nc1))
  sqrt(sd2)
}

splitdf = split(df, with(df, df$keybl), drop = TRUE)

for (c in 1:length(splitdf)) {
  for (i in 1:length(splitdf[[i]])) {
    a = splitdf[[i]]
    b = sd(a)
  }
}

1) The function itself is not correct, as it gives slightly lower values than it should, and I don't understand why. Could it be that it does not stop when z+1 has reached the last row? If so, how can that be corrected?

2) The loop is totally wrong, but it is what I could come up with after several hours of no success.

Can anybody help me?

Thanks,

Antra


2 Answers

What you're trying to do would benefit from a more general formula, which will make it easier. If you didn't need to break it into pieces by the keybl variable, you'd be done.

dd <- df #df is not a good name for a data.frame variable since df has a meaning in statistics

dd$df <- dd$n-1
pooledSD <- sqrt( sum(dd$sd^2 * dd$df) / sum(dd$df) )
# note, in this case I only pre-calculated df because I'll need it more than once. The sum of squares, variance, etc. are only used once.

An important general principle in R is to use vector math as much as possible. In this trivial case it won't matter much, but to see how to do this on large data.frame objects, where compute speed matters more, read on.

# First use R's vector facilities to define the variables you need for pooling.
dd$df <- dd$n-1
dd$s2 <- dd$sd^2 # sd isn't a good name for a standard deviation column, even in a data.frame: sd() is already a function, it's a bad habit to shadow it, and standard deviations have a standard symbol anyway
dd$ss <- dd$s2 * dd$df

And now just use convenience functions for splitting and calculating the necessary sums. Note that only one function is executed inside each implicit loop (the *apply functions, aggregate, etc. are all implicit loops that execute a function many times).

ds <- aggregate(ss ~ keybl, data = dd, sum)
ds$df <- tapply(dd$df, dd$keybl, sum) # two different built-in methods for split-apply; we could use aggregate for both if we wanted
# divide your ss by your df and voila
ds$s2 <- ds$ss / ds$df
# and you can also easily get your sd
ds$s <- sqrt(ds$s2)
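
As a side note, here is a minimal sketch of the aggregate-only variant mentioned in the comment above (same dd as before; it produces the same table):

ds <- aggregate(cbind(ss, df) ~ keybl, data = dd, sum) # both sums in one implicit loop
ds$s2 <- ds$ss / ds$df
ds$s <- sqrt(ds$s2)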

And the correct answer is:

           keybl           ss df           s2          s
1 Chen12ChinDesp 2.508800e+00  2 1.254400e+00   1.120000
2 Hans11SwedDesp 8.099454e+05  3 2.699818e+05 519.597740
3 Mato10GermDesp 4.860000e+02  6 8.100000e+01   9.000000
4 Vest02DenmDesp 8.106832e+02 16 5.066770e+01   7.118125
5 Wawa07ChinDesp 2.720000e-04 17 1.600000e-05   0.004000

This looks much less concise than other methods (like 42-'s answer below), but if you unroll those in terms of how many R commands are actually being executed, this is much more concise. For a short problem like this either way is fine, but I thought I'd show you the method that uses the most vector math. It also highlights why those convenient implicit-loop functions are available: expressiveness. If you used for loops to accomplish the same thing, the temptation to put everything inside the loop would be stronger, and that can be a bad idea in R.
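
For comparison, here is a rough sketch of what an equivalent for-loop version might look like (reusing the dd columns defined above). It produces the same s values but needs explicit per-group bookkeeping, which is exactly the temptation mentioned above:

groups <- unique(dd$keybl)
res <- data.frame(keybl = groups, s = NA_real_)
for (i in seq_along(groups)) {
  g <- dd[dd$keybl == groups[i], ]        # subset one site at a time
  res$s[i] <- sqrt(sum(g$ss) / sum(g$df)) # pooled SD for that site
}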

John


The pooled SD under the assumption of independence (so the covariance terms can be assumed to be zero) will be: sqrt( sum_over_groups[ (n-1)*var ] / (sum(n-1) - N_groups) )

     lapply( split(dat, dat$keybl), 
          function(dd) sqrt( sum( dd$sd^2 * (dd$n-1) )/(sum(dd$n-1)-nrow(dd)) ) )
#-------------------------
$Chen12ChinDesp
[1] 1.583919

$Hans11SwedDesp
[1] Inf

$Mato10GermDesp
[1] 11.0227

$Vest02DenmDesp
[1] 9.003795

$Wawa07ChinDesp
[1] 0.004123106

IRTFM