
Beginner's resources/introductions to classification algorithms [closed]

Hi everybody. I am entirely new to the topic of classification algorithms and need a few good pointers on where to start some "serious reading". I am currently trying to find out whether machine learning and automated classification algorithms would be a worthwhile addition to an application of mine.

I already scanned through "How to Solve It: Modern Heuristics" by Z. Michalewicz and D. Fogel (in particular, the chapters about linear classifiers using neural networks), and on the practical side I am currently looking through the WEKA toolkit source code. My next (planned) step would be to dive into the realm of Bayesian classification algorithms.
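Just to make the goal concrete, here is roughly the kind of integration I have in mind, sketched against the WEKA API (a minimal sketch, assuming the WEKA 3 class names shown below; iris.arff is just one of the sample datasets that ships with WEKA):

    import java.util.Random;

    import weka.classifiers.Evaluation;
    import weka.classifiers.bayes.NaiveBayes;
    import weka.core.Instances;
    import weka.core.converters.ConverterUtils.DataSource;

    public class NaiveBayesSketch {
        public static void main(String[] args) throws Exception {
            // Load an ARFF file; iris.arff is bundled with the WEKA distribution
            Instances data = new DataSource("iris.arff").getDataSet();
            // Tell WEKA which attribute is the class label (here: the last one)
            data.setClassIndex(data.numAttributes() - 1);

            // Train a Naive Bayes classifier on the full dataset
            NaiveBayes nb = new NaiveBayes();
            nb.buildClassifier(data);

            // Estimate how well it generalizes with 10-fold cross-validation
            Evaluation eval = new Evaluation(data);
            eval.crossValidateModel(nb, data, 10, new Random(1));
            System.out.println(eval.toSummaryString());
        }
    }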

Unfortunately, I am lacking a serious theoretical foundation in this area (let alone having used it in any way as of yet), so any hints at where to look next would be appreciated; in particular, a good overview of the available classification algorithms would be helpful. Being more of a craftsman and less of a theoretician, the more practical, the better...

Hints, anyone?

Dirk asked May 01 '10



2 Answers

I've always found Andrew Moore's Tutorials to be very useful. They're grounded in solid statistical theory and will help a great deal if you go on to read research papers later. Here's a short description of what they cover:

These include classification algorithms such as decision trees, neural nets, Bayesian classifiers, Support Vector Machines and case-based (aka non-parametric) learning. They include regression algorithms such as multivariate polynomial regression, MARS, Locally Weighted Regression, GMDH and neural nets. And they include other data mining operations such as clustering (mixture models, k-means and hierarchical), Bayesian networks and Reinforcement Learning.
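Since you are already reading the WEKA source, note that most of those classification algorithms sit behind a common interface there, so you can benchmark a few of them on your own data before committing to one approach. A rough sketch (class names are from the WEKA 3 API; the dataset filename is just a placeholder):

    import java.util.Random;

    import weka.classifiers.Classifier;
    import weka.classifiers.Evaluation;
    import weka.classifiers.functions.SMO;   // support vector machine
    import weka.classifiers.trees.J48;       // C4.5-style decision tree
    import weka.core.Instances;
    import weka.core.converters.ConverterUtils.DataSource;

    public class CompareClassifiers {
        public static void main(String[] args) throws Exception {
            // "mydata.arff" is a placeholder for whatever dataset you experiment with
            Instances data = new DataSource("mydata.arff").getDataSet();
            data.setClassIndex(data.numAttributes() - 1);

            // Every WEKA classifier fits the same interface, so the evaluation loop is shared
            Classifier[] candidates = { new J48(), new SMO() };
            for (Classifier c : candidates) {
                Evaluation eval = new Evaluation(data);
                eval.crossValidateModel(c, data, 10, new Random(1));
                System.out.printf("%s: %.1f%% correct%n",
                        c.getClass().getSimpleName(), eval.pctCorrect());
            }
        }
    }

Trying another algorithm from the list above is then just a matter of adding it to the array.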

Jacob answered Sep 20 '22


The answer referring to Andrew Moore's tutorials is a good one. I'd like to augment it, however, by suggesting some reading on the need which drives the creation of many classification systems in the first place: identification of causal relationships. This is relevant to many modeling problems involving statistical inference.

The best current resource I know of for learning about causality and classifier systems (especially Bayesian classifiers) is Judea Pearl's book "Causality: Models, Reasoning, and Inference".

Joel Hoff answered Sep 21 '22