Can sklearn random forest directly handle categorical features?

Say I have a categorical feature, color, which takes the values

['red', 'blue', 'green', 'orange'],

and I want to use it to predict something in a random forest. If I one-hot encode it (i.e. I change it to four dummy variables), how do I tell sklearn that the four dummy variables are really one variable? Specifically, when sklearn is randomly selecting features to use at different nodes, it should either include the red, blue, green and orange dummies together, or it shouldn't include any of them.

I've heard that there's no way to do this, but I'd imagine there must be a way to deal with categorical variables without arbitrarily coding them as numbers or something like that.

hahdawg asked Jul 12 '14



3 Answers

No, there isn't. Somebody's working on this and the patch might be merged into mainline some day, but right now there's no support for categorical variables in scikit-learn except dummy (one-hot) encoding.
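
For concreteness, here is a minimal sketch of that dummy-encoding workaround, using sklearn's OneHotEncoder in a pipeline on made-up data; note that the forest still treats each dummy column as an independent feature, which is exactly the limitation the question describes:

    import pandas as pd
    from sklearn.compose import ColumnTransformer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.pipeline import Pipeline
    from sklearn.preprocessing import OneHotEncoder

    # Made-up toy data for illustration.
    df = pd.DataFrame({
        "color": ["red", "blue", "green", "orange", "red", "green"],
        "y": [0, 1, 0, 1, 0, 1],
    })

    pipe = Pipeline([
        # Expand 'color' into one 0/1 column per category.
        ("encode", ColumnTransformer(
            [("onehot", OneHotEncoder(handle_unknown="ignore"), ["color"])]
        )),
        ("forest", RandomForestClassifier(n_estimators=100, random_state=0)),
    ])
    pipe.fit(df[["color"]], df["y"])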

Fred Foo answered Oct 23 '22


Most implementations of random forest (and many other machine learning algorithms) that accept categorical inputs either just automate the encoding of the categorical features for you or use a method that becomes computationally intractable when the number of categories is large.
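
As a hedged illustration of the "automating the encoding" point, here is what an arbitrary integer coding looks like with sklearn's OrdinalEncoder on made-up data; a tree then splits on these integers as if they were ordered, even though the ordering carries no meaning:

    import pandas as pd
    from sklearn.preprocessing import OrdinalEncoder

    colors = pd.DataFrame({"color": ["red", "blue", "green", "orange"]})
    enc = OrdinalEncoder()
    print(enc.fit_transform(colors))
    # [[3.] [0.] [1.] [2.]] -- categories are sorted alphabetically, so
    # blue=0, green=1, orange=2, red=3: an ordering with no real meaning,
    # but one a tree will happily split on.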

A notable exception is H2O. H2O has a very efficient method for handling categorical data directly, which often gives it an edge over tree-based methods that require one-hot encoding.
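
For reference, a minimal sketch of what training on a raw categorical column looks like in H2O's Python API; the data and column names here are made up, and this assumes a local H2O installation:

    import h2o
    from h2o.estimators import H2ORandomForestEstimator

    h2o.init()  # starts (or connects to) a local H2O cluster
    frame = h2o.H2OFrame({
        "color": ["red", "blue", "green", "orange", "red", "green"],
        "y": [0, 1, 0, 1, 0, 1],
    })
    frame["color"] = frame["color"].asfactor()  # mark as categorical
    frame["y"] = frame["y"].asfactor()          # classification target

    model = H2ORandomForestEstimator(ntrees=50, seed=0)
    model.train(x=["color"], y="y", training_frame=frame)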

This article by Will McGinnis has a very good discussion of one-hot encoding and its alternatives.

This article by Nick Dingwall and Chris Potts has a very good discussion of categorical variables and tree-based learners.

denson answered Oct 23 '22


You have to turn the categorical variable into a series of dummy variables. Yes, I know it's annoying and seems unnecessary, but that is how sklearn works. If you are using pandas, use pd.get_dummies; it works really well.
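
A minimal sketch of that suggestion on a made-up frame; pd.get_dummies expands the color column into four 0/1 columns that sklearn can consume:

    import pandas as pd

    df = pd.DataFrame({"color": ["red", "blue", "green", "orange"]})
    dummies = pd.get_dummies(df, columns=["color"])
    print(list(dummies.columns))
    # ['color_blue', 'color_green', 'color_orange', 'color_red']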

Hemanth Kondapalli answered Oct 23 '22