
Using Scikit-Learn's SVR, how do you combine categorical and continuous features in predicting the target?

I want to use a support vector machine to solve a regression problem: predicting the income of teachers based on a few features that are a mixture of categorical and continuous. For example, I have race [white, asian, hispanic, black], number of years teaching, and years of education.

For the categorical feature, I used scikit-learn's preprocessing module and one-hot encoded the 4 races. In this case, a white teacher looks something like [1,0,0,0], so I have an array of {[1,0,0,0], [0,1,0,0],...[0,0,1,0], [1,0,0,0]} representing the race of each teacher, encoded for SVR. I can perform a regression with just race vs. income, i.e.:

clf = SVR(C=1.0)
clf.fit(racearray, income)

I can also perform a regression using just the quantitative features. However, I don't know how to combine the two sets of features, i.e.

continuousarray = zip(yearsteaching, yearseducation)
clf.fit((racearray, continuousarray), income)
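In other words, I think what I need is a single feature matrix with the one-hot race columns and the two continuous columns side by side. My best guess is something like the sketch below (assuming racearray, yearsteaching, yearseducation and income all have the same length), but I'm not sure this is the right way to do it with scikit-learn:

import numpy as np
from sklearn.svm import SVR

# stack the one-hot race columns and the continuous columns into one matrix
X = np.column_stack([racearray, yearsteaching, yearseducation])  # shape (n_teachers, 6)

clf = SVR(C=1.0)
clf.fit(X, income)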
Asked Sep 12 '25 21:09 by Zooey Lee
2 Answers

You can use scikit-learn's OneHotEncoder. If your data are in a numpy array racearray and the columns are

[continuous_feature1, continuous_feature2, categorical, continuous_feature3]

your code should look like this (keep in mind that numpy indexing starts at 0):

from sklearn.preprocessing import OneHotEncoder

enc = OneHotEncoder(categorical_features=[2])
race_encoded = enc.fit_transform(racearray)

You can then have a look at your race_encoded array as usual and use it in SVR:

clf = SVR(C=1.0)
clf.fit(race_encoded, income)
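Note that the categorical_features argument to OneHotEncoder was deprecated and removed in later scikit-learn releases. A rough equivalent with current versions uses ColumnTransformer; a minimal sketch, assuming the categorical column sits at index 2 as above:

from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder
from sklearn.svm import SVR

# one-hot encode column 2 (race) and pass the continuous columns through unchanged
ct = ColumnTransformer([('race', OneHotEncoder(), [2])], remainder='passthrough')
X_encoded = ct.fit_transform(racearray)

clf = SVR(C=1.0)
clf.fit(X_encoded, income)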
Answered Sep 14 '25 12:09 by lanenok


You can easily get dummies for your categorical features and then start modelling:

Let's say you have your numerical and categorical features, plus the income target, in a DataFrame df:

import pandas as pd

cat_columns = ['race']  # the categorical column (assumed to be named 'race' here)
df_encoded = pd.get_dummies(df, columns=cat_columns)

and fit:

from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

y = df_encoded['income'].values                 # target column ('income' here)
X = df_encoded.drop(columns=['income']).values  # features, including the dummy columns
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=0)
svmregr = make_pipeline(StandardScaler(), SVR(C=1.0, epsilon=0.2))
svmregr.fit(X_train, y_train)
svmregr.score(X_test, y_test)
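For illustration, here is a toy frame (names and values made up) showing that get_dummies only expands the categorical column and leaves the numeric columns and the target untouched:

import pandas as pd

# hypothetical toy data, just to show the shape of the encoded frame
df = pd.DataFrame({
    'race': ['white', 'asian', 'hispanic', 'black'],
    'years_teaching': [5, 12, 3, 20],
    'years_education': [16, 18, 16, 22],
    'income': [48000, 61000, 45000, 70000],
})

df_encoded = pd.get_dummies(df, columns=['race'])
# df_encoded keeps years_teaching, years_education and income as-is and adds
# race_asian, race_black, race_hispanic, race_white as 0/1 columns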
Answered Sep 14 '25 12:09 by Ahmad Pour