
Keep same dummy variable in training and testing data

I am building a prediction model in Python with separate training and testing sets. The training data contains numeric categorical variables, e.g., zip code [91521, 23151, 12355, ...], and string categorical variables, e.g., city ['Chicago', 'New York', 'Los Angeles', ...].

To prepare the training data, I first use pd.get_dummies to create dummy variables for these columns, and then fit the model on the transformed training data.

I apply the same transformation to my test data and predict with the trained model. However, I get the error

ValueError: Number of features of the model must match the input. Model n_features is 1487 and input n_features is 1345

The reason is that the test data yields fewer dummy variables: it contains fewer distinct 'city' and 'zipcode' values.

How can I solve this problem? For example, OneHotEncoder only encodes numeric categorical variables, and DictVectorizer() only encodes string categorical variables. I searched online and found a few similar questions, but none of them really addresses mine.

Handling categorical features using scikit-learn

https://www.quora.com/If-the-training-dataset-has-more-variables-than-the-test-dataset-what-does-one-do

https://www.quora.com/What-is-the-best-way-to-do-a-binary-one-hot-one-of-K-coding-in-Python
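The mismatch is easy to reproduce with a toy example (the data here is hypothetical): encoding train and test separately produces different column sets whenever the test set lacks a category.

```python
import pandas as pd

# Train has three cities, test only two
train = pd.DataFrame({'city': ['Chicago', 'New York', 'Los Angeles']})
test = pd.DataFrame({'city': ['Chicago', 'New York']})  # no 'Los Angeles'

# Encoding each frame separately yields different numbers of columns
train_dummies = pd.get_dummies(train)
test_dummies = pd.get_dummies(test)

print(train_dummies.shape[1])  # 3 dummy columns
print(test_dummies.shape[1])   # 2 dummy columns -> feature-count mismatch
```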

asked Dec 26 '16 by nimning

People also ask

Why do you leave one dummy variable out?

By dropping a dummy variable column, we can avoid this trap. This example shows two categories, but it extends to any number of categorical levels. In general, if we have k categories, we use k − 1 dummy variables; dropping one dummy variable protects against the dummy variable trap.

Can you have more than one dummy variable?

If you have a nominal variable that has more than two levels, you need to create multiple dummy variables to "take the place of" the original nominal variable. For example, imagine that you wanted to predict depression from year in school: freshman, sophomore, junior, or senior.

Why is it important to use Drop_first true during dummy variable creation?

drop_first=True is important to use, as it drops the redundant extra column created during dummy variable creation and hence reduces the correlations among the dummy variables.

What is dummy variable trap?

The dummy variable trap is a scenario where attributes are highly correlated (multicollinear) and one variable predicts the value of the others. When we use one-hot encoding for categorical data, one dummy variable (attribute) can be predicted from the other dummy variables.
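The redundancy described above can be seen directly with pandas (toy data): in a full one-hot encoding every row sums to 1, so any one column equals 1 minus the sum of the rest, and drop_first=True removes that linear dependence.

```python
import pandas as pd

df = pd.DataFrame({'city': ['Chicago', 'New York', 'Los Angeles']})

# Full one-hot encoding: each row sums to exactly 1, so any one column
# is 1 minus the sum of the others (perfect multicollinearity)
full = pd.get_dummies(df['city'])
print((full.sum(axis=1) == 1).all())  # True

# drop_first=True drops one level's column and breaks that dependence
reduced = pd.get_dummies(df['city'], drop_first=True)
print(reduced.shape[1])  # 2 columns for 3 categories
```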


4 Answers

You can also just get the missing columns and add them to the test dataset:

# Columns present in the train set but missing from the test set
missing_cols = set(train.columns) - set(test.columns)
# Add each missing column to the test set with a default value of 0
for c in missing_cols:
    test[c] = 0
# Ensure the test set's columns are in the same order as the train set's
test = test[train.columns]

This code also ensures that columns produced by categories present in the test set but not in the training set are removed.
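The snippet above can be wrapped in a small helper for reuse (the function name match_columns is just illustrative):

```python
import pandas as pd

def match_columns(train, test, fill_value=0):
    """Give test the same columns, in the same order, as train."""
    test = test.copy()
    # Categories unseen in test get all-zero columns
    for c in set(train.columns) - set(test.columns):
        test[c] = fill_value
    # Selecting train's columns also drops test-only columns
    return test[train.columns]

train_d = pd.get_dummies(pd.DataFrame({'city': ['a', 'b', 'c']}))
test_d = pd.get_dummies(pd.DataFrame({'city': ['a', 'd']}))
aligned = match_columns(train_d, test_d)
print(list(aligned.columns) == list(train_d.columns))  # True
```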

answered Oct 20 '22 by Thibault Clement


Assuming the train and test datasets have identical column names, you can concatenate them, call get_dummies on the combined dataset, and then split it back into train and test.

You can do it this way:

import pandas as pd
train = pd.DataFrame(data=[['a', 123, 'ab'], ['b', 234, 'bc']],
                     columns=['col1', 'col2', 'col3'])
test = pd.DataFrame(data=[['c', 345, 'ab'], ['b', 456, 'ab']],
                    columns=['col1', 'col2', 'col3'])
# Remember where train ends so the combined frame can be split back
train_objs_num = len(train)
dataset = pd.concat(objs=[train, test], axis=0)
# Encode once over the union of categories, then split by position
dataset_preprocessed = pd.get_dummies(dataset)
train_preprocessed = dataset_preprocessed[:train_objs_num]
test_preprocessed = dataset_preprocessed[train_objs_num:]

As a result, the train and test datasets have the same number of features.

answered Oct 20 '22 by Eduard Ilyasov


train2,test2 = train.align(test, join='outer', axis=1, fill_value=0)

train2 and test2 then have the same columns; fill_value gives the value to use for columns missing on either side.
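For example, on two small dummy-encoded frames (toy data), align with join='outer' gives both frames the union of the columns:

```python
import pandas as pd

train = pd.get_dummies(pd.DataFrame({'city': ['a', 'b']}))  # city_a, city_b
test = pd.get_dummies(pd.DataFrame({'city': ['b', 'c']}))   # city_b, city_c

# Both frames get the union of columns; missing ones are filled with 0
train2, test2 = train.align(test, join='outer', axis=1, fill_value=0)
print(list(train2.columns) == list(test2.columns))  # True
print(sorted(train2.columns))  # ['city_a', 'city_b', 'city_c']
```

Note that with join='outer' the training frame also gains columns for test-only categories; join='left' would instead keep only the training columns.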

answered Oct 20 '22 by user1482030


I have used this in the past, after running get_dummies on both the train and test sets:

X_test = X_test.reindex(columns = X_train.columns, fill_value=0)

Obviously this needs a little tweaking for the individual case, but it throws away columns for values that appear only in the test set, and columns missing from the test set are filled in, in this case with all zeros.
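A small demonstration of the reindex approach (toy data):

```python
import pandas as pd

X_train = pd.get_dummies(pd.DataFrame({'city': ['a', 'b']}))
X_test = pd.get_dummies(pd.DataFrame({'city': ['b', 'c']}))  # 'c' is novel

# Conform X_test to X_train's columns: novel columns are dropped,
# missing ones are added and filled with 0
X_test = X_test.reindex(columns=X_train.columns, fill_value=0)
print(list(X_test.columns))         # ['city_a', 'city_b']  (city_c dropped)
print(int(X_test['city_a'].sum()))  # 0 -> filled with zeros
```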

answered Oct 20 '22 by demongolem