I am trying to compute a correlation matrix and filter the correlations by their p-values to find the highly correlated pairs.
To explain what I mean, say I have a data frame like this.
df
A B C D
0 2 NaN 2 -2
1 NaN 1 1 1.1
2 1 NaN NaN 3.2
3 -4 NaN 2 2
4 NaN 1 2.1 NaN
5 NaN 3 1 1
6 3 NaN 0 NaN
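To make the example reproducible, the frame can be built like this (missing entries as np.nan; pandas will render some values as floats, e.g. 2.0 rather than 2):

import numpy as np
import pandas as pd

df = pd.DataFrame({
    'A': [2, np.nan, 1, -4, np.nan, np.nan, 3],
    'B': [np.nan, 1, np.nan, np.nan, 1, 3, np.nan],
    'C': [2, 1, np.nan, 2, 2.1, 1, 0],
    'D': [-2, 1.1, 3.2, 2, np.nan, 1, np.nan],
})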
For the correlation coefficients I used df.corr(). This method can handle a data frame with NaN values and, more importantly, it tolerates pairs of columns with zero overlap (e.g. columns A and B):
rho = df.corr()
A B C D
A 1.000000 NaN -0.609994 0.041204
B NaN 1.0 -0.500000 -1.000000
C -0.609994 -0.5 1.000000 0.988871
D 0.041204 -1.0 0.988871 1.000000
The challenge is computing the p-values. I didn't find a built-in method for this, but in "pandas columns correlation with statistical significance", @BKay provided a loop-based way to compute them. That approach raises the following error when a pair of columns has fewer than 3 overlapping observations, so I modified it by catching the exception:
ValueError: zero-size array to reduction operation maximum which has no identity
pval = rho.copy()
for i in range(df.shape[1]):      # df.shape[1] is the number of columns
    for j in range(df.shape[1]):
        try:
            # simple OLS of column i on column j; for a single regressor its
            # F-test p-value matches the Pearson correlation p-value
            df_ols = pd.ols(y=df.iloc[:, i], x=df.iloc[:, j], intercept=True)
            pval.iloc[i, j] = df_ols.f_stat['p-value']
        except ValueError:
            # fewer than 3 overlapping observations for this pair
            pval.iloc[i, j] = None
pval
A B C D
A 0.000000 NaN 0.582343 0.973761
B NaN 0.000000 0.666667 NaN
C 0.582343 0.666667 0.000000 0.011129
D 0.973761 NaN 0.011129 0.000000
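(Side note: pd.ols comes from an older pandas version; on newer releases where it is gone, I believe an equivalent loop can be written with scipy.stats.pearsonr on the overlapping observations of each pair, since for a single regressor the OLS F-test p-value should equal the Pearson correlation p-value. A rough sketch:)

import numpy as np
import pandas as pd
from scipy import stats

pval = pd.DataFrame(np.nan, index=df.columns, columns=df.columns)
for ci in df.columns:
    for cj in df.columns:
        x, y = df[ci], df[cj]
        mask = x.notna() & y.notna()   # rows where both columns are observed
        if mask.sum() >= 3:            # skip pairs with too little overlap
            pval.loc[ci, cj] = stats.pearsonr(x[mask], y[mask])[1]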
Either way, this loop-based approach produces the p-value matrix, but it becomes extremely slow as the size of the original data frame increases (my real data frame is ~5000 rows x 500 columns). What would you suggest to compute this p-value matrix efficiently for a large data frame?
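One direction I have been considering, instead of looping, is to get the p-values directly from rho and the pairwise overlap counts via the usual t-statistic for a Pearson correlation, t = r*sqrt((n-2)/(1-r^2)) with a two-sided p-value of 2*sf(|t|, n-2). A rough sketch, only checked against the toy frame above:

import numpy as np
import pandas as pd
from scipy import stats

rho = df.corr()

# pairwise number of overlapping (non-NaN) observations for every pair of columns
obs = df.notna().astype(int)
n = obs.T.dot(obs)

# t-statistic of the Pearson correlation and its two-sided p-value
with np.errstate(divide='ignore', invalid='ignore'):
    t = rho * np.sqrt((n - 2) / (1 - rho ** 2))
pval = pd.DataFrame(2 * stats.t.sf(np.abs(t), n - 2),
                    index=rho.index, columns=rho.columns)

# pairs with fewer than 3 overlapping rows get no p-value
pval[n < 3] = np.nan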