 

Why is Random Forest with a single tree much better than a Decision Tree classifier?

I apply the decision tree classifier and the random forest classifier to my data with the following code:

from sklearn import tree
from sklearn.ensemble import RandomForestClassifier


def decision_tree(train_X, train_Y, test_X, test_Y):
    clf = tree.DecisionTreeClassifier()
    clf.fit(train_X, train_Y)

    return clf.score(test_X, test_Y)


def random_forest(train_X, train_Y, test_X, test_Y):
    clf = RandomForestClassifier(n_estimators=1)
    clf.fit(train_X, train_Y)

    return clf.score(test_X, test_Y)

Why are the results so much better for the random forest classifier (over 100 runs, randomly sampling 2/3 of the data for training and 1/3 for testing)?
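For reference, here is a minimal sketch of an evaluation loop like the one described (illustrative only; train_test_split and the plain statistics module are assumptions, not necessarily the exact harness used):

import statistics
from sklearn.model_selection import train_test_split

def evaluate(algorithm, X, Y, runs=100):
    # repeatedly split 2/3 train / 1/3 test and collect the accuracy scores
    scores = []
    for _ in range(runs):
        train_X, test_X, train_Y, test_Y = train_test_split(X, Y, test_size=1/3)
        scores.append(algorithm(train_X, train_Y, test_X, test_Y))
    return statistics.mean(scores), statistics.stdev(scores)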

100%|███████████████████████████████████████| 100/100 [00:01<00:00, 73.59it/s]
Algorithm: Decision Tree
  Min     : 0.3883495145631068
  Max     : 0.6476190476190476
  Mean    : 0.4861783113770316
  Median  : 0.48868030937802126
  Stdev   : 0.047158171852401135
  Variance: 0.0022238931724605985
100%|███████████████████████████████████████| 100/100 [00:01<00:00, 85.38it/s]
Algorithm: Random Forest
  Min     : 0.6846846846846847
  Max     : 0.8653846153846154
  Mean    : 0.7894823428836184
  Median  : 0.7906101571063208
  Stdev   : 0.03231671150915106
  Variance: 0.0010443698427656967

Isn't a random forest with one estimator just a decision tree? Have I done something wrong, or have I misunderstood the concept?

asked Jan 13 '18 by hallow_me

People also ask

Why is random forest better than other classifiers?

Random forests are great with high-dimensional data, since we are working with subsets of the data. They are faster to train than decision trees because we are working only on a subset of features in this model, so we can easily work with hundreds of features.

Does random forests perform better than an extra tree classifier?

It seems that the Random Forest gets better results, but the difference is very small. As mentioned above, the variance of the Extra Trees is lower, but again, the difference is practically insignificant. It is remarkable that results vary more for datasets 4 and 5 where most of the features are redundant or repeated.

Why do we prefer a forest collection of trees rather than a single tree?

The majority prediction from multiple trees is better than an individual tree's prediction because the trees protect each other from their individual errors. This is, however, dependent on the trees being relatively uncorrelated with each other.
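A back-of-the-envelope illustration of that last point (a sketch that assumes, unrealistically, that the trees err independently of one another):

from math import comb

p = 0.6   # accuracy of one tree (idealised, and assumed independent of the others)
n = 25    # number of trees; odd, so there is always a strict majority

# probability that more than half of the trees vote for the correct class
majority_acc = sum(comb(n, k) * p**k * (1 - p)**(n - k)
                   for k in range(n // 2 + 1, n + 1))
print(majority_acc)   # ~0.85, noticeably better than any single tree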


1 Answer

Isn't a random forest with one estimator just a decision tree?

Well, this is a good question, and the answer turns out to be no; the Random Forest algorithm is more than a simple bag of individually-grown decision trees.

Apart from the randomness induced by ensembling many trees, the Random Forest (RF) algorithm also incorporates randomness when building individual trees, in two distinct ways, neither of which is present in the simple Decision Tree (DT) algorithm.

The first is the number of features to consider when looking for the best split at each tree node: while DT considers all the features, RF considers a random subset of them, of size equal to the parameter max_features (see the docs).
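In scikit-learn terms (a sketch; "sqrt" is the default subset size for classification, spelled "auto" in older versions):

from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier

# DT: every split searches over all features (max_features=None by default)
dt = DecisionTreeClassifier()

# RF: each split in each tree searches over a random subset of sqrt(n_features) features
rf = RandomForestClassifier(n_estimators=1, max_features="sqrt")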

The second is that, while DT considers the whole training set, a single RF tree considers only a bootstrapped sub-sample of it; from the docs again:

The sub-sample size is always the same as the original input sample size but the samples are drawn with replacement if bootstrap=True (default).
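A quick illustration of what such a bootstrap sample looks like (a sketch with NumPy, not scikit-learn's internal code):

import numpy as np

rng = np.random.default_rng(0)
n = 1000                          # pretend this is the training-set size

# n draws *with replacement* from the n training rows
idx = rng.integers(0, n, size=n)

# roughly 63% of the original rows appear at least once; the rest are left out
print(len(np.unique(idx)) / n)    # ~0.63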


The RF algorithm is essentially the combination of two independent ideas: bagging, and random selection of features (see the Wikipedia entry for a nice overview). Bagging is essentially my second point above, but applied to an ensemble; random selection of features is my first point above, and it seems that it had been independently proposed by Tin Kam Ho before Breiman's RF (again, see the Wikipedia entry). Ho had already suggested that random feature selection alone improves performance. This is not exactly what you have done here (you still use the bootstrap sampling idea from bagging, too), but you could easily replicate Ho's idea by setting bootstrap=False in your RandomForestClassifier() arguments. The fact is that, given this research, the difference in performance is not unexpected...
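That is (keeping the default max_features, so only the random feature selection remains in effect):

clf = RandomForestClassifier(n_estimators=1, bootstrap=False)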

To replicate exactly the behaviour of a single tree in RandomForestClassifier(), you should use both bootstrap=False and max_features=None arguments, i.e.

clf = RandomForestClassifier(n_estimators=1, max_features=None, bootstrap=False)

in which case neither bootstrap sampling nor random feature selection will take place, and the performance should be roughly equal to that of a single decision tree.
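A quick sanity check (illustrative; iris is just a stand-in dataset, not the data from the question):

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier

X, y = load_iris(return_X_y=True)
train_X, test_X, train_Y, test_Y = train_test_split(X, y, test_size=1/3, random_state=0)

dt = DecisionTreeClassifier(random_state=0).fit(train_X, train_Y)
rf = RandomForestClassifier(n_estimators=1, max_features=None,
                            bootstrap=False, random_state=0).fit(train_X, train_Y)

print(dt.score(test_X, test_Y), rf.score(test_X, test_Y))   # should be (roughly) equal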

answered Sep 21 '22 by desertnaut