I am running the decision tree algorithm from scikit-learn and I want to get the feature_importances_ vector along with the feature names, so I can determine which features are dominant in the labeling process. Could you help me? Thank you.
Suppose that you have samples as rows of a pandas.DataFrame:
from pandas import DataFrame
features = DataFrame({'f1': (1, 2, 2, 2), 'f2': (1, 1, 1, 1), 'f3': (3, 3, 1, 1)})
labels = ('a', 'a', 'b', 'b')
and then use a tree or a forest classifier:
from sklearn.tree import DecisionTreeClassifier
classifier = DecisionTreeClassifier()
classifier.fit(features, labels)
Then the importances should match the frame columns:
for name, importance in zip(features.columns, classifier.feature_importances_):
    print(name, importance)
# f1 0.0
# f2 0.0
# f3 1.0
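The same zip pattern works for the forest classifiers mentioned above, since ensembles expose the same feature_importances_ attribute (averaged over their trees). A minimal sketch, reusing the toy data from this answer with a RandomForestClassifier (the n_estimators and random_state values here are arbitrary choices for reproducibility):

```python
from pandas import DataFrame
from sklearn.ensemble import RandomForestClassifier

features = DataFrame({'f1': (1, 2, 2, 2), 'f2': (1, 1, 1, 1), 'f3': (3, 3, 1, 1)})
labels = ('a', 'a', 'b', 'b')

# A forest averages importances over its trees; fixing random_state
# makes the bootstrap sampling (and thus the numbers) repeatable.
forest = RandomForestClassifier(n_estimators=10, random_state=0)
forest.fit(features, labels)

for name, importance in zip(features.columns, forest.feature_importances_):
    print(name, importance)
```

Because of bootstrapping, the forest may spread some importance onto f1 rather than giving everything to f3 as the single tree did.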
A good suggestion by wrwrwr! Since the order of the values in the classifier's feature_importances_ attribute matches the order of the column names in features.columns, you can pair them with the zip() function.
It is also helpful to sort the features by importance and show only the top N.
Say you have created a classifier:
clf = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
Then you can print the top 5 features in descending order of importance:
for importance, name in sorted(zip(clf.feature_importances_, X_train.columns), reverse=True)[:5]:
    print(name, importance)
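An equivalent way to get the sorted top N is to wrap the importances in a pandas Series indexed by the column names and call nlargest(). A sketch, using hypothetical training data in place of your X_train / y_train:

```python
import pandas as pd
from sklearn.tree import DecisionTreeClassifier

# Hypothetical stand-ins for X_train / y_train.
X_train = pd.DataFrame({'f1': (1, 2, 2, 2), 'f2': (1, 1, 1, 1), 'f3': (3, 3, 1, 1)})
y_train = ('a', 'a', 'b', 'b')

clf = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)

# Series keeps each importance paired with its feature name;
# nlargest(5) returns the five largest values in descending order.
importances = pd.Series(clf.feature_importances_, index=X_train.columns)
print(importances.nlargest(5))
```

This avoids the manual sorted(zip(...)) step and prints a labeled, already-sorted result.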