Measure of Feature Importance in PCA

I am doing Principal Component Analysis (PCA) and I'd like to find out which features contribute the most to the result.

My intuition is to sum the absolute values of each feature's contributions to the individual components.

import numpy as np
from sklearn.decomposition import PCA

X = np.array([[-1, -1, 4, 1], [-2, -1, 4, 2], [-3, -2, 4, 3], [1, 1, 4, 4], [2, 1, 4, 5], [3, 2, 4, 6]])
pca = PCA(n_components=0.95, whiten=True, svd_solver='full').fit(X)
pca.components_
array([[ 0.71417303,  0.46711713,  0.        ,  0.52130459],
       [-0.46602418, -0.23839061, -0.        ,  0.85205128]])
np.sum(np.abs(pca.components_), axis=0)
array([1.18019721, 0.70550774, 0.        , 1.37335586])

This yields, in my eyes, a measure of importance of each of the original features. Note that the 3rd feature has zero importance, because I intentionally created a column that is just a constant value.

Is there a better "measure of importance" for PCA?

asked Apr 21 '21 by r0f1

People also ask

Can we get feature importance with PCA?

Principal Component Analysis (PCA) is a fantastic technique for dimensionality reduction, and can also be used to determine feature importance.

How do you measure feature importance?

The concept is really straightforward: We measure the importance of a feature by calculating the increase in the model's prediction error after permuting the feature. A feature is "important" if shuffling its values increases the model error, because in this case the model relied on the feature for the prediction.
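
A minimal sketch of that idea using scikit-learn's permutation_importance (the model and toy data below are assumptions purely for illustration):

from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Toy regression problem, purely for illustration
X, y = make_regression(n_samples=200, n_features=4, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestRegressor(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and record how much the held-out score drops
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
print(result.importances_mean)  # one importance value per feature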

What is the measure of PCA?

PCA is defined as an orthogonal linear transformation that transforms the data to a new coordinate system such that the greatest variance by some scalar projection of the data comes to lie on the first coordinate (called the first principal component), the second greatest variance on the second coordinate, and so on.
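
A minimal sketch of that definition with numpy and scikit-learn (the toy data below is made up; it just gives the features unequal variances so the ordering is visible):

import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3)) * np.array([3.0, 1.0, 0.2])  # unequal variances

pca = PCA().fit(X)

# The orthogonal linear transformation: project the centred data onto the components
Z = (X - pca.mean_) @ pca.components_.T

# Variance is greatest along the first new coordinate and decreases from there
print(Z.var(axis=0, ddof=1))
print(pca.explained_variance_)  # the same values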

Should I scale features before PCA?

The rule of thumb is that if your data is already on the same scale (e.g. every feature is XX per 100 inhabitants), scaling it will remove the information contained in the fact that your features have unequal variances. If the data is on different scales, then you should normalize it before running PCA.
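
For the "different scales" case, a common approach is to standardize each feature before PCA, e.g. with a scikit-learn pipeline (a sketch reusing the X from the question above):

import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X = np.array([[-1, -1, 4, 1], [-2, -1, 4, 2], [-3, -2, 4, 3],
              [1, 1, 4, 4], [2, 1, 4, 5], [3, 2, 4, 6]])

# Standardize each feature to zero mean / unit variance, then run PCA
pipeline = make_pipeline(StandardScaler(), PCA(n_components=0.95, svd_solver='full'))
X_reduced = pipeline.fit_transform(X)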


2 Answers

The measure of importance for PCA is in explained_variance_ratio_. This array gives the fraction of variance explained by each component, sorted in descending order of importance. It sums to 1 when all the components are used, or otherwise to the minimal possible value above the requested threshold. In your example you set the threshold to 95% of the variance to be explained, so the array sums to 0.9949522861608583: the first component explains 92.021143% of the variance and the second 7.474085%, hence the 2 components you receive.
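
For the X in the question that looks like this (the values below are the ones quoted in this answer, displayed the way the interpreter would show them):

pca.explained_variance_ratio_
array([0.92021143, 0.07474085])

pca.explained_variance_ratio_.sum()
0.9949522861608583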

components_ is the array that stores the directions of maximum variance in the feature space. Its dimensions are n_components_ by n_features_. This is what the data points are multiplied by in transform() to obtain the reduced-dimensionality projection of the data.
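
As a quick sketch of that relationship, reusing the X and imports from the question: with whiten=True (as in the question) transform() additionally divides each projected coordinate by the square root of its explained variance, so the identity is shown here with whiten left at its default of False.

pca_plain = PCA(n_components=2, svd_solver='full').fit(X)  # whiten=False

# transform() is the projection of the centred data onto the rows of components_
projected = (X - pca_plain.mean_) @ pca_plain.components_.T
np.allclose(projected, pca_plain.transform(X))  # True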

update

In order to get the percentage contribution of the original features to each of the Principal Components, you just need to normalize components_, since its rows describe how much each original feature contributes to the corresponding projection.

r = np.abs(pca.components_.T)
r/r.sum(axis=0)

array([[0.41946155, 0.29941172],
       [0.27435603, 0.15316146],
       [0.        , 0.        ],
       [0.30618242, 0.54742682]])

As you can see, the third feature does not contribute to the PCs.

If you want the total contribution of the original features to the explained variance, you need to take each PC's contribution (i.e. explained_variance_ratio_) into account:

ev = np.abs(pca.components_.T).dot(pca.explained_variance_ratio_)
ttl_ev = pca.explained_variance_ratio_.sum()*ev/ev.sum()
print(ttl_ev)

[0.40908847 0.26463667 0.         0.32122715]
answered Oct 09 '22 by igrinis

If you just sum the PCs with np.sum(np.abs(pca.components_), axis=0), that assumes all PCs are equally important, which is rarely true. To use PCA for crude feature selection, sum only after discarding low-contribution PCs and/or after scaling the PCs by their relative contributions (a sketch of the discard variant is included at the end of this answer).

Here is a visual example that highlights why a plain sum doesn't work as desired.

Given 3 observations of 20 features (visualized as three 5x4 heatmaps):

>>> print(X.T)
[[2 1 1 1 1 1 1 1 1 4 1 1 1 4 1 1 1 1 1 2]
 [1 1 1 1 1 1 1 1 1 4 1 1 1 6 3 1 1 1 1 2]
 [1 1 1 2 1 1 1 1 1 5 2 1 1 5 1 1 1 1 1 2]]

[image: original data, three 5x4 heatmaps]
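
The matrix itself is not constructed in the answer, but for reproducibility it can be rebuilt from the values printed above:

import numpy as np
from sklearn.decomposition import PCA  # used by the fit below

# One row per observation; transposed so that X has shape (20, 3) and X.T matches the print above
X = np.array([
    [2, 1, 1, 1, 1, 1, 1, 1, 1, 4, 1, 1, 1, 4, 1, 1, 1, 1, 1, 2],
    [1, 1, 1, 1, 1, 1, 1, 1, 1, 4, 1, 1, 1, 6, 3, 1, 1, 1, 1, 2],
    [1, 1, 1, 2, 1, 1, 1, 1, 1, 5, 2, 1, 1, 5, 1, 1, 1, 1, 1, 2],
]).T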

These are the resulting PCs:

>>> pca = PCA(n_components=None, whiten=True, svd_solver='full').fit(X.T)

[image: the principal components]

Note that PC3 has high magnitude at (2,1), but if we check its explained variance, it offers ~0 contribution:

>>> pca.explained_variance_ratio_
array([6.63888694e-01, 3.36111306e-01, 2.29710917e-32])

This causes a feature selection discrepancy when summing the unscaled PCs (left) vs summing the PCs scaled by their explained variance ratios (right):

>>> unscaled = np.sum(np.abs(pca.components_), axis=0)
>>> scaled = np.sum(pca.explained_variance_ratio_[:, None] * np.abs(pca.components_), axis=0)

[image: unscaled vs scaled PC sums]

With the unscaled sum (left), the meaningless PC3 is still given 33% weight. This causes (2,1) to be considered the most important feature, but if we look back at the original data, (2,1) offers low discrimination between observations.

With the scaled sum (right), PC1 and PC2 respectively have 66% and 33% weight. Now (3,1) and (3,2) are the most important features, which actually tracks with the original data.
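
For completeness, the "discard low-contribution PCs" variant mentioned at the top of this answer could look like the sketch below (the 1% cut-off is an arbitrary choice for illustration):

>>> keep = pca.explained_variance_ratio_ > 0.01   # drops the near-zero PC3
>>> discarded_sum = np.sum(np.abs(pca.components_[keep]), axis=0)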

answered Oct 09 '22 by tdy