
doing PCA on very large data set in R

Tags: r, pca, bigdata

I have a very large training set (~2Gb) in a CSV file. The file is too large to read directly into memory (read.csv() brings the computer to a halt) and I would like to reduce the size of the data file using PCA. The problem is that (as far as I can tell) I need to read the file into memory in order to run a PCA algorithm (e.g., princomp()).

I have tried the bigmemory package to read the file in as a big.matrix, but princomp doesn't function on big.matrix objects and it doesn't seem like big.matrix can be converted into something like a data.frame.
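For reference, the bigmemory attempt described above would look roughly like this (the file names are only illustrative):

  library(bigmemory)
  # read the CSV into a file-backed big.matrix so it does not have to fit in RAM
  X <- read.big.matrix("train.csv", type = "double",
                       backingfile = "train.bin", descriptorfile = "train.desc")
  # princomp(X)  # does not work: princomp() expects a regular matrix or data frame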

Is there a way of running princomp on a large data file that I'm missing?

I'm a relative novice at R, so some of this may be obvious to more seasoned users (apologies in advance).

Thanks for any info.

asked Sep 15 '12 by user141146

People also ask

Does PCA work on large data?

Yes, it is possible. If the data matrix does not fit into RAM, it is not the end of the world yet: there are efficient algorithms that can work with data stored on a hard drive. See e.g. randomized PCA as described in Halko et al., 2010, An algorithm for the principal component analysis of large data sets.
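A minimal in-memory sketch of the randomized SVD idea behind that paper is below; rand_pca() is a hypothetical helper, not a library function. The two matrix products it needs (X %*% Omega and t(Q) %*% X) can each be accumulated chunk by chunk, which is what makes the approach workable for data stored on disk:

  # sketch of randomized PCA via randomized SVD (Halko et al., 2010)
  rand_pca <- function(X, k, oversample = 10) {
    X <- scale(X, center = TRUE, scale = FALSE)    # center the columns
    l <- k + oversample
    Omega <- matrix(rnorm(ncol(X) * l), ncol = l)  # random test matrix
    Q <- qr.Q(qr(X %*% Omega))                     # orthonormal basis for the range of X
    B <- t(Q) %*% X                                # small l x p matrix
    s <- svd(B, nu = 0)
    list(rotation = s$v[, 1:k],                    # approximate loadings
         sdev     = s$d[1:k] / sqrt(nrow(X) - 1))  # approximate standard deviations
  }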

How do you perform PCA for data of very high dimensionality?

If you want to do PCA on the correlation matrix, you will need to standardize the columns of your data matrix before applying the SVD. This amounts to subtracting the means (centering) and then dividing by the standard deviations (scaling). This will be the most efficient approach if you want the full PCA.
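A minimal sketch of that full approach, assuming a numeric matrix X that already fits in memory:

  Xs <- scale(X, center = TRUE, scale = TRUE)  # subtract column means, divide by column sds
  s  <- svd(Xs, nu = 0)
  loadings <- s$v                              # principal axes of the correlation-matrix PCA
  sdev     <- s$d / sqrt(nrow(Xs) - 1)         # component standard deviations
  # equivalent to prcomp(X, center = TRUE, scale. = TRUE)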

What is the maximum number of principal components?

In a data set, the maximum number of principal components is min(n - 1, p), where n is the number of observations and p the number of variables.
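A small illustration with arbitrary toy data: with n = 5 rows and p = 10 columns, only min(n - 1, p) = 4 components carry non-zero variance.

  set.seed(1)
  X  <- matrix(rnorm(5 * 10), nrow = 5)
  pc <- prcomp(X)
  round(pc$sdev, 10)  # the fifth (and last) standard deviation is numerically zero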


1 Answer

The way I solved it was by calculating the sample covariance matrix iteratively. That way you only need a subset of the data in memory at any point in time. Reading just a subset of the data can be done using readLines, where you open a connection to the file and read it in chunks. It is a two-pass algorithm:

Calculate the mean values per column, assuming the columns are the variables (a sketch of this pass follows the list):

  1. Open a file connection (con <- file(..., open = "r"))
  2. Read 1000 lines (readLines(con, n = 1000))
  3. Calculate the column sums for that chunk
  4. Add those sums to a running total (col_sums = col_sums + new_sums)
  5. Repeat 2-4 until end of file.
  6. Divide by the number of rows to get the column means.
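A minimal sketch of this first pass, assuming a headerless, all-numeric, comma-separated file at the hypothetical path "train.csv":

  con <- file("train.csv", open = "r")
  col_sums <- NULL
  n_rows   <- 0
  repeat {
    lines <- readLines(con, n = 1000)
    if (length(lines) == 0) break                 # end of file
    chunk <- do.call(rbind, lapply(strsplit(lines, ","), as.numeric))
    col_sums <- if (is.null(col_sums)) colSums(chunk) else col_sums + colSums(chunk)
    n_rows   <- n_rows + nrow(chunk)
  }
  close(con)
  col_means <- col_sums / n_rows                  # column means for the second pass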

Calculate the covariance matrix (again, a sketch follows the list):

  1. Open a file connection (con <- file(..., open = "r"))
  2. Read 1000 lines (readLines(con, n = 1000))
  3. Center the chunk using the column means from the first pass and calculate its cross-products using crossprod
  4. Add those cross-products to a running total
  5. Repeat 2-4 until end of file.
  6. Divide by the number of rows minus 1 to get the covariance matrix.
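A sketch of the second pass under the same assumptions, reusing col_means and n_rows from the first pass:

  con <- file("train.csv", open = "r")
  xprod <- 0
  repeat {
    lines <- readLines(con, n = 1000)
    if (length(lines) == 0) break                  # end of file
    chunk    <- do.call(rbind, lapply(strsplit(lines, ","), as.numeric))
    centered <- sweep(chunk, 2, col_means, "-")    # subtract the column means
    xprod    <- xprod + crossprod(centered)        # accumulate t(centered) %*% centered
  }
  close(con)
  cov_mat <- xprod / (n_rows - 1)                  # sample covariance matrix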

When you have the covariance matrix, just call princomp with covmat = your_covmat and princomp will skip calculating the covariance matrix itself.
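For example, with cov_mat being the matrix accumulated above:

  pc <- princomp(covmat = cov_mat)
  summary(pc)    # variance explained per component
  # loadings(pc) # the principal component loadings; scores require the original data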

This way the datasets you can process can be much, much larger than your available RAM. During the iterations, memory usage is roughly what one chunk takes (e.g. 1000 rows); after that it is limited to the covariance matrix (nvar * nvar doubles).

answered Sep 29 '22 by Paul Hiemstra