
Python pandas function applied to all pairwise combinations of rows

I'm trying to run a function (correlation) on all pairwise combinations of rows within a pandas dataframe:

import itertools
from scipy.stats import pearsonr

stats = dict()
for l in itertools.combinations(dat.index.tolist(), 2):
    stats[l] = pearsonr(dat.loc[l[0], :], dat.loc[l[1], :])  # stores (r, p)

Of course this is quite slow, and I'm wondering how to do the equivalent with something like apply() or another vectorised approach.

Note: I know I can compute the correlation matrix directly with the pandas corr() function; however, it doesn't return the associated p-values (which I need for filtering purposes).
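As a minimal illustration of that gap (toy is a hypothetical frame, not from the question): scipy's pearsonr returns both the coefficient and the two-sided p-value, while DataFrame.corr() returns only coefficients. Note also that corr() works column-wise, so correlating rows means correlating the transpose.

import numpy as np
import pandas as pd
from scipy.stats import pearsonr

toy = pd.DataFrame(np.random.randn(4, 10))     # 4 rows of 10 observations
r, p = pearsonr(toy.loc[0, :], toy.loc[1, :])  # coefficient and p-value
rmat = toy.T.corr()                            # 4x4 coefficients only, no p-values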

asked Jan 20 '16 by Constantino

1 Answer

This should get you some speed-up. Define a function Pearson, modified from the docs in Primer's link:

from scipy.special import betai  # regularized incomplete beta (betainc in newer SciPy)

def Pearson(r, n=len(dat)):
    # NB: the default n is evaluated once, at definition time, so dat must
    # already exist; pass n explicitly if dat changes afterwards.
    r = max(min(r, 1.0), -1.0)  # clamp against floating-point drift
    df = n - 2
    if abs(r) == 1.0:
        prob = 0.0
    else:
        t_squared = r**2 * (df / ((1.0 - r) * (1.0 + r)))
        prob = betai(0.5 * df, 0.5, df / (df + t_squared))
    return (r, prob)

Then use applymap, which applies a function element-wise, on the result of dat.corr(); each correlation coefficient r gets passed to Pearson. (In pandas 2.1+, applymap is deprecated in favour of DataFrame.map.)

import numpy as np
import pandas as pd

np.random.seed(10)
dat = pd.DataFrame(np.random.randn(5, 5))
dat[0] = np.arange(5)  # overwrite two columns so they correlate perfectly
dat[1] = np.arange(5)  # ^^^

dat.corr().applymap(Pearson)

    0   1   2   3   4
0   (1.0, 0.0)  (1.0, 0.0)  (0.713010395675, 0.176397305541)    (0.971681374885, 0.00569624513678)  (0.0188249871501, 0.97603269768)
1   (1.0, 0.0)  (1.0, 0.0)  (0.713010395675, 0.176397305541)    (0.971681374885, 0.00569624513678)  (0.0188249871501, 0.97603269768)
2   (0.713010395675, 0.176397305541)    (0.713010395675, 0.176397305541)    (1.0, 0.0)  (0.549623945218, 0.337230071385)    (-0.280514871109, 0.647578381153)
3   (0.971681374885, 0.00569624513678)  (0.971681374885, 0.00569624513678)  (0.549623945218, 0.337230071385)    (1.0, 0.0)  (0.176622737448, 0.77629170593)
4   (0.0188249871501, 0.97603269768)    (0.0188249871501, 0.97603269768)    (-0.280514871109, 0.647578381153)   (0.176622737448, 0.77629170593)     (1.0, 0.0)
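As a quick sanity check (my own sketch, not part of the original answer), any cell of that matrix should agree with scipy's pearsonr on the corresponding pair of columns, up to floating-point error:

from scipy.stats import pearsonr

r_ref, p_ref = pearsonr(dat[0], dat[3])                     # scipy's reference result
r_ours, p_ours = Pearson(dat.corr().loc[0, 3], n=len(dat))  # pass n in case dat changed
assert abs(r_ref - r_ours) < 1e-8 and abs(p_ref - p_ours) < 1e-8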

You do see a speed-up with this method when dat is large, but it's still fairly slow because of the element-wise Python-level calls.

np.random.seed(10)
dat = pd.DataFrame(np.random.randn(100, 100))

%%timeit
dat.corr().applymap(Pearson)

10 loops, best of 3: 118 ms per loop

%%timeit
stats = dict()

for l in combinations(dat.index.tolist(), 2):  # combinations imported from itertools
    stats[l] = pearsonr(dat.loc[l[0], :], dat.loc[l[1], :])

1 loops, best of 3: 1.56 s per loop
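For completeness: the same t-test arithmetic can be applied to the whole correlation matrix at once with scipy.special.betainc, avoiding the per-cell Python calls entirely. A minimal vectorised sketch (the name corr_with_pvalues and the helper itself are mine, not from the answer):

import numpy as np
import pandas as pd
from scipy.special import betainc  # regularized incomplete beta function

def corr_with_pvalues(df):
    """Correlation matrix plus p-values, using the same t-test math as
    Pearson() but on whole NumPy arrays instead of cell by cell."""
    r = np.clip(df.corr().values, -1.0, 1.0)
    dof = len(df) - 2
    with np.errstate(divide='ignore'):
        t_squared = r**2 * (dof / ((1.0 - r) * (1.0 + r)))
        p = betainc(0.5 * dof, 0.5, dof / (dof + t_squared))
    p[np.abs(r) == 1.0] = 0.0  # perfect correlations, e.g. the diagonal
    cols = df.columns
    return pd.DataFrame(r, cols, cols), pd.DataFrame(p, cols, cols)

r_mat, p_mat = corr_with_pvalues(dat)

With this, something like p_mat < 0.05 gives exactly the boolean mask the question needs for filtering.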
answered Nov 20 '22 by Kevin