scikit KernelPCA unstable results

I'm trying to use KernelPCA for reducing the dimensionality of a dataset to 2D (both for visualization purposes and for further data analysis).

I experimented computing KernelPCA using a RBF kernel at various values of Gamma, but the result is unstable:

[animated GIF: 2D KernelPCA projections; each frame is a slightly different value of gamma, varying continuously from 0 to 1]

Looks like it is not deterministic.

Is there a way to stabilize it/make it deterministic?

Code used to generate transformed data:

from sklearn.decomposition import KernelPCA

def pca(X, gamma1):
    # Reduce X to kernel principal components using an RBF kernel.
    # fit_inverse_transform=True is only needed if inverse_transform is used.
    kpca = KernelPCA(kernel="rbf", fit_inverse_transform=True, gamma=gamma1)
    X_kpca = kpca.fit_transform(X)
    #X_back = kpca.inverse_transform(X_kpca)
    return X_kpca
asked Jul 01 '15 by fferri

1 Answer

KernelPCA should be deterministic and evolve continuously with gamma.
This is different from RBFSampler, which does use built-in randomness in order to provide an efficient (more scalable) approximation of the RBF kernel.
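To see the contrast, here is a minimal sketch (on made-up data, with an arbitrary gamma of 0.5) showing that two independent KernelPCA fits agree up to component signs, while two RBFSampler fits with different random states do not:

import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.kernel_approximation import RBFSampler

# Made-up data just for the comparison.
X = np.random.RandomState(0).rand(100, 5)

# Two independent KernelPCA fits with the same gamma give the same embedding
# (up to possible sign flips of individual components, discussed below).
a = KernelPCA(n_components=2, kernel="rbf", gamma=0.5).fit_transform(X)
b = KernelPCA(n_components=2, kernel="rbf", gamma=0.5).fit_transform(X)
print(np.allclose(np.abs(a), np.abs(b)))  # True

# RBFSampler draws random Fourier features, so different random states
# give different (though statistically equivalent) feature maps.
s1 = RBFSampler(gamma=0.5, random_state=1).fit_transform(X)
s2 = RBFSampler(gamma=0.5, random_state=2).fit_transform(X)
print(np.allclose(s1, s2))  # False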

However, what can change in KernelPCA is the order of the principal components: in scikit-learn they are returned sorted in descending order of eigenvalue, so if you have two eigenvalues close to each other, the order may change with gamma.

My guess (from the gif) is that this is what is happening here: the axes along which you are plotting are not constant, so your data seems to jump around.
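One way to check this hypothesis (a sketch, assuming X is your data matrix) is to track the two largest eigenvalues as a function of gamma and see whether they approach or cross each other. Note that lambdas_ was later renamed eigenvalues_ in recent scikit-learn versions:

import numpy as np
from sklearn.decomposition import KernelPCA

def top_eigenvalues(X, gammas, k=2):
    # Collect the k largest kernel-PCA eigenvalues for each gamma.
    vals = []
    for g in gammas:
        kpca = KernelPCA(n_components=k, kernel="rbf", gamma=g).fit(X)
        vals.append(kpca.lambdas_)  # eigenvalues, sorted descending
    return np.array(vals)

# If the two curves get close or cross, the component order (and hence
# your plotting axes) can swap from one frame to the next.
eigs = top_eigenvalues(X, np.linspace(0.01, 1.0, 50))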

Could you provide the code you used to produce the gif?

I'm guessing it is a plot of the data points along the first 2 principal components, but it would help to see how you produced it.

You could try to further inspect it by looking at the values of kpca.alphas_ (the eigenvectors) for each value of gamma.
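For instance, here is a small sketch (again assuming X is your data) that compares the eigenvectors of consecutive fits; a cosine similarity near -1 indicates a sign flip, while a value near 0 suggests the components swapped or rotated:

import numpy as np
from sklearn.decomposition import KernelPCA

prev = None
for g in np.linspace(0.01, 1.0, 50):
    kpca = KernelPCA(n_components=2, kernel="rbf", gamma=g).fit(X)
    alphas = kpca.alphas_  # eigenvectors (eigenvectors_ in newer scikit-learn)
    if prev is not None:
        for i in range(2):
            cos = alphas[:, i] @ prev[:, i] / (
                np.linalg.norm(alphas[:, i]) * np.linalg.norm(prev[:, i]))
            print(f"gamma={g:.2f}  component {i}: cos={cos:+.2f}")
    prev = alphas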

Hope this makes sense.

EDIT: As you noted, it looks like the points are reflected across an axis. The most plausible explanation is that one of the eigenvectors flips sign (note this does not affect the eigenvalue).

I put together a simple gist to reproduce the issue (you'll need a Jupyter notebook to run it). You can see the sign flipping when you change the value of gamma.
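If you only need the animation to look stable, one workaround (my own sketch, not a scikit-learn feature) is to flip the sign of any component that is anti-correlated with the previous frame's embedding before plotting:

import numpy as np

def align_signs(Y, Y_prev):
    # Flip each column of Y so it correlates positively with the
    # corresponding column of the previous frame's embedding.
    if Y_prev is None:
        return Y
    Y = Y.copy()
    for i in range(Y.shape[1]):
        if Y[:, i] @ Y_prev[:, i] < 0:
            Y[:, i] = -Y[:, i]
    return Y

# Usage between frames: X_kpca = align_signs(pca(X, gamma1), previous_embedding)

This handles sign flips only; if the components actually swap order (an eigenvalue crossing), you would also need to match columns between frames, e.g. greedily by absolute correlation.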

As a complement, note that this kind of discrepancy happens only because you fit the KernelPCA object several times. Once you have settled on a particular gamma value and fit kpca once, you can call transform several times and get consistent results. For the classical PCA, the docs mention that:

Due to implementation subtleties of the Singular Value Decomposition (SVD), which is used in this implementation, running fit twice on the same matrix can lead to principal components with signs flipped (change in direction). For this reason, it is important to always use the same estimator object to transform data in a consistent fashion.

I don't know about the behavior of a single KernelPCA object that you would fit several times (I did not find anything relevant in the docs).

It does not apply to your case though as you have to fit the object with several gamma values.
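For completeness, the consistent-usage pattern from the docs quote looks like this (a sketch, with a hypothetical chosen gamma and a hypothetical X_new to transform):

from sklearn.decomposition import KernelPCA

kpca = KernelPCA(n_components=2, kernel="rbf", gamma=0.5)  # your chosen gamma
kpca.fit(X)                 # fit a single estimator object once
Z1 = kpca.transform(X_new)  # repeated calls to transform on the same
Z2 = kpca.transform(X_new)  # estimator give identical results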

answered Sep 28 '22 by ldirer