
How do I get the components for LDA in scikit-learn?


When using PCA in sklearn, it's easy to get out the components:

from sklearn import decomposition

pca = decomposition.PCA(n_components=n_components)
pca_data = pca.fit_transform(input_data)  # fit() returns the estimator itself; fit_transform() returns the projected data
pca_components = pca.components_

But I can't for the life of me figure out how to get the components out of LDA, as there is no components_ attribute. Is there a similar attribute in sklearn's LDA?

asked Dec 20 '12 by Joe

People also ask

How many parameters are necessary for LDA on this dataset?

Notice that, in the case of LDA, the fit method (and likewise fit_transform) takes two parameters, X_train and y_train, because LDA is supervised. In the case of PCA, which is unsupervised, fit requires only one parameter, X_train.
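
A minimal sketch of that signature difference on the iris data (the variable names here are illustrative, not from the original post):

from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

X_train, y_train = load_iris(return_X_y=True)

# PCA is unsupervised: fitting sees only the features
pca_scores = PCA(n_components=2).fit_transform(X_train)

# LDA is supervised: fitting needs the class labels as well
lda_scores = LinearDiscriminantAnalysis(n_components=2).fit_transform(X_train, y_train)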

How do you decide the N_components in the linear discriminant analysis ()?

To decide what value to use for n_components (i.e. how many components to keep), we can take advantage of the fact that explained_variance_ratio_ reports the variance explained by each output component and is sorted in descending order.
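
A short sketch of that heuristic for LDA (the 0.95 threshold is an arbitrary choice for illustration):

import numpy as np
from sklearn.datasets import load_iris
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

X, y = load_iris(return_X_y=True)
lda = LinearDiscriminantAnalysis().fit(X, y)

# Cumulative variance explained by the first k discriminants (sorted descending)
cumulative = np.cumsum(lda.explained_variance_ratio_)
print(cumulative)

# Keep just enough components to reach, say, 95% explained variance
n_components = int(np.searchsorted(cumulative, 0.95)) + 1
print(n_components)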


1 Answer

In the case of PCA, the documentation is clear: pca.components_ holds the eigenvectors (the principal axes).
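
A quick sanity check of that claim (my own sketch, not from the original answer): the rows of pca.components_ match the eigenvectors of the sample covariance matrix, up to sign.

import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 4))
pca = PCA().fit(X)

# Eigen-decompose the sample covariance and sort by decreasing eigenvalue
eigvals, eigvecs = np.linalg.eigh(np.cov(X, rowvar=False))
eigvecs = eigvecs[:, np.argsort(eigvals)[::-1]]

# Eigenvectors are only defined up to sign, so compare absolute values
print(np.allclose(np.abs(pca.components_), np.abs(eigvecs.T)))  # True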

In the case of LDA, we need the lda.scalings_ attribute.


Visual example using iris data and sklearn:

import numpy as np
import matplotlib.pyplot as plt
from sklearn import datasets
from sklearn.preprocessing import StandardScaler
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

iris = datasets.load_iris()
X = iris.data
y = iris.target

# In general it is a good idea to scale the data
scaler = StandardScaler()
X = scaler.fit_transform(X)

lda = LinearDiscriminantAnalysis()
lda.fit(X, y)
x_new = lda.transform(X)

Verify that lda.scalings_ holds the eigenvectors: transform computes (X - xbar_) @ scalings_, and since the standardized data has (essentially) zero mean, transforming the identity matrix returns the scalings themselves:

print(lda.scalings_)
print(lda.transform(np.identity(4)))

[[-0.67614337  0.0271192 ]
 [-0.66890811  0.93115101]
 [ 3.84228173 -1.63586613]
 [ 2.17067434  2.13428251]]

[[-0.67614337  0.0271192 ]
 [-0.66890811  0.93115101]
 [ 3.84228173 -1.63586613]
 [ 2.17067434  2.13428251]]
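
For an exact check that does not rely on the mean being zero, we can add the fitted overall mean back in first (xbar_ is an attribute of the default 'svd' solver); this continues the example above:

# transform subtracts xbar_, so shift the identity matrix by it for an exact match
shifted_identity = np.identity(4) + lda.xbar_
print(np.allclose(lda.transform(shifted_identity), lda.scalings_))  # True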

Additionally, here is a useful function to plot the biplot and verify the result visually:

def myplot(score, coeff, labels=None):
    xs = score[:, 0]
    ys = score[:, 1]
    n = coeff.shape[0]

    plt.scatter(xs, ys, c=y)  # color the samples by class label
    for i in range(n):
        # one arrow per original feature, from the origin to its loading
        plt.arrow(0, 0, coeff[i, 0], coeff[i, 1], color='r', alpha=0.5)
        if labels is None:
            plt.text(coeff[i, 0] * 1.15, coeff[i, 1] * 1.15, "Var" + str(i + 1),
                     color='g', ha='center', va='center')
        else:
            plt.text(coeff[i, 0] * 1.15, coeff[i, 1] * 1.15, labels[i],
                     color='g', ha='center', va='center')

    plt.xlabel("LD1")
    plt.ylabel("LD2")
    plt.grid()

# Call the function.
myplot(x_new[:, 0:2], lda.scalings_)
plt.show()

Results: [biplot image: the iris samples projected onto LD1/LD2, with red arrows showing the four feature loadings]

answered Sep 19 '22 by seralouk