When I use a decision tree algorithm on a data set consisting of numerical values, I find that the program splits nodes on values that do not even exist in the data set.
Example: [classification results output, showing a split on attrib2 at 3.76179]
whereas there is no value like 3.76179 for attrib2 anywhere in my data set. Why is that?
Most decision-tree building algorithms (J48, C4.5, CART, ID3) work as follows: they sort the values of the numeric attribute, consider every candidate threshold between two adjacent distinct values, and score each candidate with a purity measure such as information gain or Gini impurity.

Once you've found the best split point, algorithms disagree on how to represent it. Example: say you have -4 (Yes), -3 (Yes), -3 (Yes), -2 (No), -1 (No). Any value between -3 and -2 will have the same purity. Some algorithms (C4.5) will say val <= -3. Others, e.g. Weka, will choose the average and give val <= -2.5.
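Here is a minimal Python sketch of that candidate-threshold search (not the actual J48/Weka source; the entropy helper and function names are mine), showing both conventions for writing the chosen threshold:

```python
from collections import Counter
import math

def entropy(labels):
    """Shannon entropy of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def best_split(values, labels):
    """Score every boundary between adjacent distinct sorted values and
    return the best threshold in both common representations."""
    pairs = sorted(zip(values, labels))
    n = len(pairs)
    best = None
    for i in range(1, n):
        lo, hi = pairs[i - 1][0], pairs[i][0]
        if lo == hi:
            continue  # no boundary between equal values
        left = [label for _, label in pairs[:i]]
        right = [label for _, label in pairs[i:]]
        # Weighted entropy after the split; lower means purer children.
        score = (len(left) * entropy(left) + len(right) * entropy(right)) / n
        if best is None or score < best[0]:
            best = (score, lo, hi)
    _, lo, hi = best
    return {"c45_style": lo,                   # "val <= -3": last value on the left
            "midpoint_style": (lo + hi) / 2}   # "val <= -2.5": Weka-style average

print(best_split([-4, -3, -3, -2, -1], ["Yes", "Yes", "Yes", "No", "No"]))
# {'c45_style': -3, 'midpoint_style': -2.5}
```

With the midpoint convention, the reported threshold (-2.5 here, or 3.76179 in your case) need not be any observed value of the attribute.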
There are several ways to choose a split point for an attribute, and not all of them pick values that appear in the data set.
A common one (though a bit simplistic) is to take the mean. It is possible that 3.76179... is the mean of the attrib2 values in your data set.
For example, if your data set is one-dimensional and consists of the values -10, -9, ..., -2, -1, 1, 2, ..., 9, 10, then a good splitting value would be 0, even though it's not in your data set.
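As a quick illustration of that heuristic (plain Python arithmetic, nothing algorithm-specific):

```python
# The 20 values -10..-1 and 1..10; their mean is 0, which is not in the data.
values = list(range(-10, 0)) + list(range(1, 11))
threshold = sum(values) / len(values)
print(threshold)            # 0.0
print(threshold in values)  # False: the split point is not an observed value
```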
Another possibility, especially if you're dealing with random forests (ensembles of decision trees), is that the splitting value is chosen at random, from a probability distribution centered around the median value. Some algorithms split according to a Gaussian centered on the mean or median, with a deviation equal to the standard deviation of the data set.
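A sketch of what that randomized strategy could look like, taking the description above literally (a Gaussian centered on the median with the data's standard deviation); the function name and seed are illustrative, not from any particular library:

```python
import random
import statistics

def random_split_point(values, seed=42):
    """Draw a split point from a Gaussian centered on the median,
    with the sample standard deviation of the data."""
    rng = random.Random(seed)
    mu = statistics.median(values)
    sigma = statistics.stdev(values)
    return rng.gauss(mu, sigma)  # almost surely not an observed value

print(random_split_point([-4, -3, -3, -2, -1]))
```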
First, you can look at how numeric values are discretised. These algorithms split a numeric range into several intervals, each chosen to have high information gain. For example, you step through the range with a step of 0.1; after each candidate split you compute its information gain and keep the best position, then you continue on the resulting intervals.
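A rough sketch of that scanning procedure, assuming a fixed step of 0.1 and entropy-based information gain (all names here are illustrative):

```python
from collections import Counter
import math

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def scan_split(values, labels, step=0.1):
    """Step through the numeric range and keep the threshold with the
    highest information gain."""
    lo, hi = min(values), max(values)
    base = entropy(labels)
    best_gain, best_t = -1.0, None
    t = lo + step
    while t < hi:
        left = [l for v, l in zip(values, labels) if v <= t]
        right = [l for v, l in zip(values, labels) if v > t]
        gain = base - (len(left) * entropy(left)
                       + len(right) * entropy(right)) / len(labels)
        if gain > best_gain:
            best_gain, best_t = gain, t
        t += step
    return best_t, best_gain

print(scan_split([-4, -3, -3, -2, -1], ["Yes", "Yes", "Yes", "No", "No"]))
```

Note that the thresholds visited this way (-3.9, -3.8, ...) are grid points, not observed values, which is another way a split like 3.76179 can appear.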