Weka's PCA is taking too long to run

I am trying to use Weka for feature selection using the PCA algorithm.

My original feature space contains ~9000 attributes across 2700 samples.
I tried to reduce dimensionality of the data using the following code:

AttributeSelection selector = new AttributeSelection();
PrincipalComponents pca = new PrincipalComponents();
Ranker ranker = new Ranker();
selector.setEvaluator(pca);
selector.setSearch(ranker);
Instances instances = SamplesManager.asWekaInstances(trainSet);
try { 
    selector.SelectAttributes(instances);
    return SamplesManager.asSamplesList(selector.reduceDimensionality(instances));
} catch (Exception e ) {
            ...
}

However, it did not finish running within 12 hours. It is stuck in the call to selector.SelectAttributes(instances);.

My questions are: Is such a long computation time expected for Weka's PCA? Or am I using PCA wrongly?

If the long run time is expected:
How can I tune the PCA algorithm to run much faster? Can you suggest an alternative (with example code showing how to use it)?

If it is not:
What am I doing wrong? How should I invoke PCA using Weka to get my reduced dimensionality?

Update: The comments confirm my suspicion that it is taking much longer than expected.
I'd like to know: how can I get PCA in Java, using Weka or an alternative library?
Added a bounty for this one.

asked Jul 14 '12 by amit

2 Answers

After digging into the WEKA code, the bottleneck turned out to be building the covariance matrix and then computing its eigenvectors. With ~9000 attributes the covariance matrix alone has ~81 million entries, and eigendecomposition of a dense n×n matrix is O(n³), so the long runtime is not surprising. Even switching to a sparse matrix implementation (I used COLT's SparseDoubleMatrix2D) did not help.
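To see where the time goes, here is a minimal plain-Java sketch (not Weka's actual implementation) of the two expensive steps: building the covariance matrix, which costs O(n²·m) for n attributes and m samples, and extracting the top principal component, approximated here by power iteration on a toy two-attribute dataset:

```java
import java.util.Arrays;

public class TinyPca {
    // Column-center the data and compute the d x d sample covariance matrix.
    // This is the O(n^2 * m) step that dominates with ~9000 attributes.
    static double[][] covariance(double[][] x) {
        int m = x.length, d = x[0].length;
        double[] mean = new double[d];
        for (double[] row : x)
            for (int j = 0; j < d; j++) mean[j] += row[j] / m;
        double[][] cov = new double[d][d];
        for (double[] row : x)
            for (int i = 0; i < d; i++)
                for (int j = 0; j < d; j++)
                    cov[i][j] += (row[i] - mean[i]) * (row[j] - mean[j]) / (m - 1);
        return cov;
    }

    // Power iteration: approximates the dominant eigenvector (= first
    // principal component) of a symmetric matrix.
    static double[] topEigenvector(double[][] a, int iters) {
        int d = a.length;
        double[] v = new double[d];
        Arrays.fill(v, 1.0 / Math.sqrt(d));
        for (int t = 0; t < iters; t++) {
            double[] w = new double[d];
            for (int i = 0; i < d; i++)
                for (int j = 0; j < d; j++) w[i] += a[i][j] * v[j];
            double norm = 0;
            for (double x : w) norm += x * x;
            norm = Math.sqrt(norm);
            for (int i = 0; i < d; i++) v[i] = w[i] / norm;
        }
        return v;
    }

    public static void main(String[] args) {
        // Toy data: two strongly correlated attributes, so the top
        // component points roughly along the (1, 1) diagonal.
        double[][] data = { {1, 1.1}, {2, 1.9}, {3, 3.2}, {4, 3.9} };
        double[] pc1 = topEigenvector(covariance(data), 100);
        // Components come out with roughly equal magnitude (~0.7 each).
        System.out.printf("pc1 = (%.3f, %.3f)%n", pc1[0], pc1[1]);
    }
}
```

This only recovers one component; real PCA (Weka's included) computes a full eigendecomposition, which is what blows up at 9000 dimensions.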

The solution I came up with was to first reduce the dimensionality with a fast method (I used an information gain ranker plus filtering based on document frequency), and then run PCA on the reduced feature set to reduce it further.

The code is more complex, but it essentially comes down to this:

Ranker ranker = new Ranker();
InfoGainAttributeEval ig = new InfoGainAttributeEval();
Instances instances = SamplesManager.asWekaInstances(trainSet);
// Stage 1: cheap filter - rank attributes by information gain
// and keep only the top FIRST_SIZE_REDUCTION of them.
ig.buildEvaluator(instances);
int[] firstAttributes = ranker.search(ig, instances);
int[] candidates = Arrays.copyOfRange(firstAttributes, 0, FIRST_SIZE_REDUCTION);
instances = reduceDimensions(instances, candidates); // helper that drops all other attributes
// Stage 2: PCA on the much smaller attribute set.
PrincipalComponents pca = new PrincipalComponents();
pca.setVarianceCovered(var);
ranker = new Ranker();
ranker.setNumToSelect(numFeatures);
AttributeSelection selection = new AttributeSelection();
selection.setEvaluator(pca);
selection.setSearch(ranker);
selection.SelectAttributes(instances);
instances = selection.reduceDimensionality(instances);

However, this method scored worse than using greedy information gain with a ranker alone, when I cross-validated for estimated accuracy.

answered Nov 11 '22 by amit


It looks like you're using the default configuration for PCA, and judging by the long runtime, it is likely doing far more work than your purposes require.

Take a look at the options for PrincipalComponents.

  1. I'm not sure whether -D means Weka will normalize the data for you or whether you have to do it yourself. You want your data normalized (centered about the mean) either way, so I would do this manually first.
  2. -R sets the amount of variance you want accounted for. The default is 0.95. The correlation in your data might not be that strong, so try setting it lower, to something like 0.8.
  3. -A sets the maximum number of attributes to include. I presume the default is all of them. Again, you should try setting it to something lower.

I suggest starting out with very lax settings (e.g. -R=0.1 and -A=2), then working your way up to acceptable results.
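The manual centering from point 1 can be sketched in plain Java; this is illustrative only, and the data layout (rows = samples, columns = attributes) is an assumption — with Weka you would apply an equivalent unsupervised filter to your Instances before running PCA:

```java
public class CenterData {
    // Center each attribute (column) about its mean, so every column
    // of the result sums to zero - the normalization PCA expects.
    static double[][] centerColumns(double[][] x) {
        int m = x.length, d = x[0].length;
        double[] mean = new double[d];
        for (double[] row : x)
            for (int j = 0; j < d; j++) mean[j] += row[j] / m;
        double[][] out = new double[m][d];
        for (int i = 0; i < m; i++)
            for (int j = 0; j < d; j++) out[i][j] = x[i][j] - mean[j];
        return out;
    }

    public static void main(String[] args) {
        double[][] data = { {1, 10}, {3, 30} };
        double[][] c = centerColumns(data);
        // prints [[-1.0, -10.0], [1.0, 10.0]]
        System.out.println(java.util.Arrays.deepToString(c));
    }
}
```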

answered Nov 11 '22 by tskuzzy