
Linear algebra application in Machine Learning [closed]

I am learning linear algebra (started recently) and am curious to know its applications in machine learning. Where can I read about this?

Thank you

asked May 09 '11 by daydreamer

People also ask

What are the applications of linear algebra in machine learning?

Linear algebra is the mathematical foundation for representing both the data and the computations in machine learning models. It is the math of arrays — technically referred to as vectors, matrices, and tensors.

Which part of linear algebra is used in machine learning?

Matrix decomposition (or factorization) — expressing a matrix as a product of simpler matrices — is another important part of linear algebra used in machine learning.

Is linear algebra used in ML?

Linear algebra is a key foundation of machine learning: it enables ML algorithms to run on huge datasets, and its concepts are widely used in developing machine learning algorithms.


2 Answers

In machine learning, we generally deal with data in the form of vectors and matrices, so any statistical method used involves linear algebra as an integral part. It is also useful in data mining.
SVD and PCA are well-known dimensionality-reduction techniques that rely on linear algebra.
Bayesian decision theory also involves a significant amount of linear algebra; you can look into that as well.
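To make the SVD-based dimensionality reduction mentioned above concrete, here is a minimal numpy sketch (the data is synthetic, and the choice of two output dimensions is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))      # 100 samples, 5 features

# Center the data, then take the SVD; the top right-singular
# vectors span the directions of greatest variance.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)

# Project onto the first 2 right-singular vectors: a 5-D -> 2-D reduction.
X_reduced = Xc @ Vt[:2].T
print(X_reduced.shape)             # (100, 2)
```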

answered Oct 14 '22 by damned


Linear Algebra provides the computational engine for the majority of Machine Learning algorithms.

For instance, probably the most conspicuous and frequent application of ML is the recommendation engine.

Aside from data retrieval, the real crux of these algorithms is often 'reconstruction' of the ridiculously sparse data used as input for these engines. The raw data supplied to Amazon.com's user-based recommendation engine is (probably) a massive data matrix in which the users are the rows and the products are the columns. To populate this matrix organically, every customer would have to purchase every product Amazon.com sells, so the vast majority of entries are missing. Linear algebra-based techniques are used to fill them in.

All of the techniques in current use involve some type of matrix decomposition, a fundamental class of linear algebra techniques; non-negative matrix factorization (NMF) and maximum-margin matrix factorization (MMMF) are perhaps the two most common.
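As a rough illustration of the decomposition idea (using truncated SVD rather than NMF or MMMF, purely because it is a one-liner in numpy; the ratings matrix here is a made-up toy):

```python
import numpy as np

# Toy user x product ratings matrix; 0 marks "not purchased/rated".
R = np.array([
    [5, 4, 0, 1],
    [4, 0, 0, 1],
    [1, 1, 0, 5],
    [0, 1, 5, 4],
], dtype=float)

# Rank-2 truncated SVD: R is approximated by U_k @ diag(s_k) @ V_k^T.
U, s, Vt = np.linalg.svd(R, full_matrices=False)
k = 2
R_hat = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

# R_hat replaces the zero entries with predicted scores, which can
# then be ranked to produce recommendations.
print(np.round(R_hat, 1))
```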

Second, many if not most ML techniques rely on a numerical optimization technique. E.g., most supervised ML algorithms involve creating a trained classifier/regressor by minimizing the difference between the value calculated by the nascent classifier and the actual value from the training data. This can be done either iteratively or with linear algebra techniques; in the latter case, the technique is usually SVD or some variant.
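The "solve directly with linear algebra instead of iterating" point can be sketched with ordinary least squares, where numpy's `lstsq` (which uses SVD internally) recovers the weights in closed form (the data and true weights below are synthetic):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 3))
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w + 0.01 * rng.normal(size=50)

# One SVD-based solve replaces many gradient-descent iterations.
w, residuals, rank, sv = np.linalg.lstsq(X, y, rcond=None)
print(np.round(w, 2))
```

For small, well-conditioned problems this closed-form solve is exact; iterative methods win only when the data is too large to factorize.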

Third, the spectral decompositions--PCA (principal component analysis) and kernel PCA--are perhaps the most commonly used dimension-reduction techniques, often applied in a pre-processing step just ahead of the ML algorithm in the data flow; for instance, PCA is often used in a Kohonen map to initialize the lattice. The principal insight underneath these techniques is that the eigenvectors of the covariance matrix (a square, symmetric matrix computed from the mean-centered data, with the feature variances on its main diagonal) are unit length and orthogonal to each other.
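That eigenvector insight can be checked directly in a few lines of numpy (synthetic data; `eigh` is the symmetric-matrix eigensolver):

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 4))
Xc = X - X.mean(axis=0)

# Covariance matrix: square, symmetric, variances on the diagonal.
C = np.cov(Xc, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(C)   # eigh assumes symmetry

# The eigenvectors are unit length and mutually orthogonal:
print(np.allclose(eigvecs.T @ eigvecs, np.eye(4)))   # True

# PCA projection onto the top-2 components (eigh sorts eigenvalues
# ascending, so the largest are last).
X_pca = Xc @ eigvecs[:, -2:]
print(X_pca.shape)                                   # (200, 2)
```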

answered Oct 14 '22 by doug