 

Getting feature_importances_ after getting optimal TPOT pipeline?

I've read through a few pages but still need someone to explain how to make this work.

I'm using TPOTRegressor() to get an optimal pipeline, but from there I would love to be able to plot the .feature_importances_ of the pipeline it returns:

best_model = TPOTRegressor(cv=folds, generations=2, population_size=10, verbosity=2, random_state=seed)  # memory='./PipelineCache', memory='auto'
best_model.fit(X_train, Y_train)
feature_importance = best_model.fitted_pipeline_.steps[-1][1].feature_importances_

I saw this kind of setup in a now-closed issue on GitHub, but currently I get this error:

Best pipeline: LassoLarsCV(input_matrix, normalize=True)

Traceback (most recent call last):
  File "main2.py", line 313, in <module>
    feature_importance = best_model.fitted_pipeline_.steps[-1][1].feature_importances_
AttributeError: 'LassoLarsCV' object has no attribute 'feature_importances_'

So, how would I get these feature importances from the optimal pipeline, regardless of which one it lands on? Is this even possible? Or does someone have a better way of plotting feature importances from a TPOT run?

Thanks!

UPDATE

For clarification: what I mean by feature importance is a measure of how important each feature (the X's) of your dataset is in determining the predicted (Y) label, plotted as a bar chart showing each feature's level of importance in the model's predictions. TPOT doesn't do this directly (I don't think), so I was thinking I'd grab the pipeline it came up with, re-run it on the training data, and then somehow use its .feature_importances_ to graph the feature importances, since these are all sklearn regressors I'm using.

Asked Mar 04 '23 by Matt Wilson

1 Answer

Very nice question.

You just need to fit the best model again in order to get the feature importances:

best_model.fit(X_train, Y_train)
extracted_best_model = best_model.fitted_pipeline_.steps[-1][1]

The second line extracts the final estimator from the best pipeline found by the CV search.
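As an illustration (my own addition, not from the original answer): a fitted sklearn pipeline stores its steps as (name, estimator) pairs, so steps[-1][1] picks out the final estimator object. The step names below are hypothetical examples of what TPOT might produce.

# Illustrative only; the actual steps depend on the pipeline TPOT finds.
print(best_model.fitted_pipeline_.steps)
# e.g. [('stackingestimator', StackingEstimator(...)),
#       ('lassolarscv', LassoLarsCV(...))]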

You can then use:

extracted_best_model.fit(X_train, Y_train)

to train it. If the best model has the desired attribute, you will be able to access it after calling extracted_best_model.fit(X_train, Y_train).


More details (in the code comments) and a toy example:

from tpot import TPOTRegressor
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
import matplotlib.pyplot as plt

digits = load_digits()
X_train, X_test, y_train, y_test = train_test_split(digits.data, digits.target,
                                                    train_size=0.75, test_size=0.25)
# reduce training features for time sake
X_train = X_train[:100,:] 
y_train = y_train[:100]

# Create the TPOT regressor
tpot = TPOTRegressor(cv=2, generations=5, population_size=50, verbosity=2)

# Fit the pipeline
tpot.fit(X_train, y_train)

# Get the best model
extracted_best_model = tpot.fitted_pipeline_.steps[-1][1]

print(extracted_best_model)
# AdaBoostRegressor(base_estimator=None, learning_rate=0.5, loss='square',
#                   n_estimators=100, random_state=None)

# Train the `extracted_best_model` using THE WHOLE DATASET.
# You need to use the whole dataset in order to get feature importances
# based on all of the available data.
X, y = digits.data, digits.target
extracted_best_model.fit(X, y)  # the full X, y -- IMPORTANT

# Access its feature importances
extracted_best_model.feature_importances_

# Plot them using a barplot
# (For the output below I actually fitted the model on X_train, y_train rather
# than the whole dataset, purely for time's sake; use `extracted_best_model.fit(X, y)`
# to base the importances on all of the data.)
positions = range(extracted_best_model.feature_importances_.shape[0])
plt.bar(positions, extracted_best_model.feature_importances_)
plt.show()
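As a small optional extension (not part of the original answer): if your features live in a pandas DataFrame, you can label the bars with the column names instead of bare indices. Note that in the toy example above X_train is a NumPy array, so this assumes you pass a DataFrame instead.

# Optional sketch, assuming X_train is a pandas DataFrame with named columns.
plt.bar(positions, extracted_best_model.feature_importances_)
plt.xticks(positions, X_train.columns, rotation=90)
plt.tight_layout()
plt.show()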

IMPORTANT NOTE: In the above example, the best model found by the pipeline was AdaBoostRegressor(base_estimator=None, learning_rate=0.5, loss='square'). This model indeed has the attribute feature_importances_. In the case where the best model does not have an attribute feature_importances_, the exact same code will not work. You will need to read the docs and check the attributes of each returned best model. E.g. if the best model were LassoCV, you would use the coef_ attribute instead.
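A minimal sketch of that fallback (my own addition, not from the original answer), assuming the final estimator exposes either feature_importances_ or coef_:

import numpy as np

final_estimator = tpot.fitted_pipeline_.steps[-1][1]
if hasattr(final_estimator, 'feature_importances_'):
    importances = final_estimator.feature_importances_
elif hasattr(final_estimator, 'coef_'):
    # For linear models such as LassoCV, fall back to the absolute
    # coefficients as a rough stand-in for feature importance.
    importances = np.abs(np.ravel(final_estimator.coef_))
else:
    raise AttributeError('Final estimator exposes neither '
                         'feature_importances_ nor coef_')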

Output:

(Bar plot of the feature importances.)

Answered Apr 06 '23 by seralouk