
Python feature selection in pipeline: how to determine feature names?

I used a pipeline and grid search to select the best parameters, and then used those parameters to fit the best pipeline ('best_pipe'). However, since the feature selection step (SelectKBest) sits inside the pipeline, SelectKBest was never fitted on its own.

I need to know the names of the 'k' selected features. Any ideas on how to retrieve them? Thanks in advance.

from sklearn import (cross_validation, feature_selection, pipeline,
                     preprocessing, linear_model, grid_search)
folds = 5
split = cross_validation.StratifiedKFold(target, n_folds=folds, shuffle=False, random_state=0)

scores = []
for k, (train, test) in enumerate(split):

    X_train, X_test, y_train, y_test = X.ix[train], X.ix[test], y.ix[train], y.ix[test]

    top_feat = feature_selection.SelectKBest()

    pipe = pipeline.Pipeline([('scaler', preprocessing.StandardScaler()),
                              ('feat', top_feat),
                              ('clf', linear_model.LogisticRegression())])

    K = [40, 60, 80, 100]
    C = [1.0, 0.1, 0.01, 0.001, 0.0001, 0.00001]
    penalty = ['l1', 'l2']

    param_grid = [{'feat__k': K,
                   'clf__C': C,
                   'clf__penalty': penalty}]

    scoring = 'precision'

    gs = grid_search.GridSearchCV(estimator=pipe, param_grid=param_grid, scoring=scoring)
    gs.fit(X_train, y_train)

    best_score = gs.best_score_
    scores.append(best_score)

    print "Fold: {} {} {:.4f}".format(k+1, scoring, best_score)
    print gs.best_params_

best_pipe = pipeline.Pipeline([('scale', preprocessing.StandardScaler()),
                               ('feat', feature_selection.SelectKBest(k=80)),
                               ('clf', linear_model.LogisticRegression(C=.0001, penalty='l2'))])

best_pipe.fit(X_train, y_train)
best_pipe.predict(X_test)
Asked by figgy on Oct 27 '15.


2 Answers

You can access the feature selector by name in best_pipe:

features = best_pipe.named_steps['feat']

Then you can call transform() on an array of column indices to get the names of the selected columns:

import numpy as np

X.columns[features.transform(np.arange(len(X.columns)))]

The output here will be the eighty column names selected in the pipeline.
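Here is a minimal, self-contained sketch of this trick on toy data (the column names and the informative-feature setup are made up for illustration). One caveat: newer scikit-learn versions require transform() to receive a 2-D array, so the index array is reshaped to a single "row" first:

```python
import numpy as np
import pandas as pd
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Toy data: 4 named columns; only 'a' and 'c' drive the target
rng = np.random.RandomState(0)
X = pd.DataFrame(rng.randn(100, 4), columns=['a', 'b', 'c', 'd'])
y = (X['a'] + X['c'] > 0).astype(int)

pipe = Pipeline([('scaler', StandardScaler()),
                 ('feat', SelectKBest(f_classif, k=2)),
                 ('clf', LogisticRegression())])
pipe.fit(X, y)

# Pull the fitted selector out of the pipeline by name
features = pipe.named_steps['feat']

# Push the column indices 0..3 through the selector; the surviving
# entries are the indices of the selected columns (reshape + astype
# keep newer sklearn/pandas versions happy)
idx = features.transform(np.arange(len(X.columns)).reshape(1, -1))
selected = X.columns[idx.ravel().astype(int)]
print(list(selected))
```

The printed list contains the k=2 column names the pipeline kept, in their original order.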

Answered by jakevdp.


Jake's answer totally works. But depending on what feature selector you're using, there's another option that I think is cleaner. This one worked for me:

X.columns[features.get_support()]

It gave me a result identical to Jake's answer. You can read more about it in the docs: get_support returns a boolean array indicating, for each column, whether it was selected. Also note that X must have the same columns, in the same order, as the training data used to fit the feature selector.
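A short self-contained sketch of the get_support approach (toy data; the column names and the rule making 'x2' informative are invented for the example):

```python
import numpy as np
import pandas as pd
from sklearn.feature_selection import SelectKBest, f_classif

# Toy data: 3 named columns; the target depends only on 'x2'
rng = np.random.RandomState(0)
X = pd.DataFrame(rng.randn(200, 3), columns=['x1', 'x2', 'x3'])
y = (X['x2'] > 0).astype(int)

selector = SelectKBest(f_classif, k=1).fit(X, y)

# Boolean mask aligned with X's columns: True where the column was kept
mask = selector.get_support()
print(list(X.columns[mask]))
```

Because the mask is boolean and aligned with X.columns, plain pandas indexing recovers the selected names directly, with no index-array gymnastics.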

Answered by bwest87.