Terminology used in decision trees

Decision trees involve far less machinery than logistic regression, so there are only a few metrics to study. We will mainly focus on impurity measures: a decision tree splits on variables recursively, choosing each split according to an impurity criterion, until a stopping criterion is reached (a minimum number of observations per terminal node, a minimum number of observations required to split a node, and so on); a scikit-learn sketch at the end of this section shows how these criteria are set in practice. The main impurity measures are:

  • Entropy: Entropy comes from information theory and is a measure of the impurity in the data. If a sample is completely homogeneous the entropy is zero, and if the sample is equally divided between the classes the entropy is one. When growing a decision tree, the predictor that reduces this heterogeneity the most is placed nearest to the root node, so that the data is classified into classes in a greedy fashion. We will cover this topic in more depth later in this chapter:

Entropy = -p1*log2(p1) - p2*log2(p2) - ... - pn*log2(pn)

Where n = number of classes and pi = proportion of observations belonging to class i. For a two-class problem, entropy is at its maximum value of 1 when the classes are equally mixed (p = 0.5) and at its minimum value of 0 at the extremes (p = 0 or p = 1). A low value of entropy is desirable, as it indicates that the classes are well segregated.
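As a quick illustration, here is a minimal sketch of the entropy calculation in Python; the function name entropy and the use of NumPy are our own choices for illustration, not part of the original text:

```python
import numpy as np

def entropy(labels):
    """Shannon entropy (base 2) of a sequence of class labels."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()          # class proportions p_i
    return -np.sum(p * np.log2(p))     # -sum(p_i * log2(p_i))

# A perfectly mixed two-class sample has entropy 1; a pure sample has entropy 0
print(entropy([0, 0, 1, 1]))   # 1.0
print(entropy([1, 1, 1, 1]))   # -0.0, i.e. 0
```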

  • Information Gain: Information gain is the expected reduction in entropy caused by partitioning the examples according to a given attribute. The idea is to start with mixed classes and to keep partitioning until each node contains observations of a single, pure class (or a stopping criterion is reached). At every stage, the variable with the maximum information gain is chosen in a greedy fashion; the formulas below define it, and a sketch of the calculation follows them.

Information Gain = Entropy of Parent - sum (weighted % * Entropy of Child)

Weighted % = Number of observations in a particular child node / Total number of observations across all child nodes
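A minimal sketch of this calculation, reusing the entropy function defined above (the function name information_gain is ours for illustration):

```python
def information_gain(parent_labels, child_label_groups):
    """Entropy of the parent node minus the weighted entropy of its children."""
    n = len(parent_labels)
    weighted_child_entropy = sum(
        (len(child) / n) * entropy(child) for child in child_label_groups
    )
    return entropy(parent_labels) - weighted_child_entropy

# Splitting a perfectly mixed parent into two pure children gives the maximum gain of 1
parent = [0, 0, 1, 1]
print(information_gain(parent, [[0, 0], [1, 1]]))   # 1.0
print(information_gain(parent, [[0, 1], [0, 1]]))   # 0.0, an uninformative split
```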

  • Gini: Gini impurity is a measure of misclassification that applies in a multi-class classification context. It behaves very much like entropy, except that it is quicker to compute because it does not involve a logarithm:

Gini = 1 - (p1^2 + p2^2 + ... + pn^2)

Where the sum runs over the n classes and pi = proportion of observations belonging to class i. Gini impurity and entropy behave very similarly: both are zero for a pure node and peak when the classes are evenly mixed, at 0.5 for Gini and 1 for entropy in the two-class case. A small sketch comparing the two follows.
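A minimal sketch of the Gini calculation, again with a function name of our own choosing, which also makes the similarity to entropy easy to see numerically:

```python
def gini(labels):
    """Gini impurity: 1 - sum(p_i^2) over the observed classes."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

# For a two-class sample, Gini peaks at 0.5 and entropy at 1.0 at a 50/50 split
print(gini([0, 0, 1, 1]), entropy([0, 0, 1, 1]))   # 0.5 1.0
print(gini([0, 0, 0, 1]), entropy([0, 0, 0, 1]))   # 0.375 0.811...
```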

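Finally, as mentioned at the start of this section, here is a minimal sketch of how the impurity criterion and the stopping criteria are specified when fitting a decision tree with scikit-learn; the dataset and the hyperparameter values are illustrative only:

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

# The impurity measure and the stopping criteria are passed as hyperparameters
tree = DecisionTreeClassifier(
    criterion='gini',        # 'entropy' would split on information gain instead
    min_samples_split=20,    # minimum observations required to split a node
    min_samples_leaf=10,     # minimum observations per terminal (leaf) node
    random_state=42,
)
tree.fit(X, y)
print(tree.get_depth(), tree.get_n_leaves())
```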