 

Random Forests interpretability

I have been using the sklearn RandomForestClassifier to solve a binary classification problem.

For a particular sample prediction, I would like to be able to know how to change the features' values to make the prediction change.

E.g. let's say I have an entry with [size = 15, width = 8, height = 13] and the model gives it a probability of 0.9 of being class 1. What I would like to be able to say is "change size from 15 to 10" and then the probability becomes 0.1, for example.

Then, optimally, what I would like is the smallest "gradient" in the feature values that moves the sample from one class to the other (or the one that produces the largest change in probability).
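The idea above can be sketched as a brute-force counterfactual search: vary one feature over a grid of candidate values and keep the value closest to the original that flips the predicted class. The dataset, model, and `smallest_flip` helper here are all illustrative, not from any library:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Toy binary classification problem standing in for the real data.
X, y = make_classification(n_samples=500, n_features=3, n_informative=3,
                           n_redundant=0, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X, y)

def smallest_flip(clf, x, feature, grid):
    """Return the candidate value for `feature` closest to x[feature]
    that flips the predicted class, or None if no candidate flips it."""
    base = clf.predict(x.reshape(1, -1))[0]
    best = None
    for v in grid:
        x2 = x.copy()
        x2[feature] = v
        if clf.predict(x2.reshape(1, -1))[0] != base:
            if best is None or abs(v - x[feature]) < abs(best - x[feature]):
                best = v
    return best

x = X[0]
grid = np.linspace(X[:, 0].min(), X[:, 0].max(), 50)
flip_value = smallest_flip(clf, x, feature=0, grid=grid)
```

This scales poorly (one model call per candidate per feature), but for a handful of features it gives exactly the "change size from 15 to 10" style of statement, with no extra dependencies.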

Maybe I'm wrong, but from what I've read, the LIME and TreeInterpreter packages do not provide this kind of information?

asked Nov 08 '22 02:11 by user130104
1 Answer

Partial dependence plots approximate the dependence between the target and a particular feature, marginalised over all the other features.

While they don't give an exact gradient at every point, they help build an intuition for how the model's prediction responds to that variable.

You can find more about it here: https://scikit-learn.org/stable/modules/generated/sklearn.inspection.plot_partial_dependence.html

answered Nov 15 '22 11:11 by mahesh ghanta