An overview of the techniques

We will now turn to an overview of the techniques, covering regression and classification trees, random forests, and gradient boosting. This will set the stage for the practical business cases.

Regression trees

To establish an understanding of tree-based methods, it is easier to start with a quantitative outcome and then move on to how the method works in a classification problem. The essence of a tree is that the features are partitioned, starting with the split that improves the RSS the most. These binary splits continue until the tree terminates. Each subsequent split is not made on the entire dataset but only on the portion of the data that fell into the prior partition. This top-down process is referred to as recursive partitioning. It is also a greedy process, a term you may stumble upon when reading about machine learning methods. Greedy means that at each split, the algorithm looks for the greatest reduction in the RSS without regard to how well it will perform on later partitions. The result is that you may end up with a full tree of unnecessary branches, leading to low bias but high variance. To control this effect, you need to prune the tree back to an optimal size after building the full tree.
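
To make recursive partitioning concrete, here is a minimal sketch in R of growing a full, unpruned regression tree with the rpart package. The data frame df, its simulated response y, and the predictors X1 and X2 are hypothetical stand-ins rather than data from the chapter; setting cp = 0 simply removes the complexity penalty so that the greedy splitting runs until the node-size limits are reached.

    # Hypothetical toy data: 30 observations, two predictors ranging from 0 to 10
    library(rpart)

    set.seed(123)
    df <- data.frame(X1 = runif(30, 0, 10),
                     X2 = runif(30, 0, 10))
    df$y <- ifelse(df$X1 < 3.5, 2, 6) + rnorm(30)

    # cp = 0 removes the complexity penalty, so the greedy recursive
    # partitioning keeps splitting until the minimum node size is reached
    full.tree <- rpart(y ~ X1 + X2, data = df, method = "anova",
                       control = rpart.control(cp = 0, minsplit = 2))
    full.tree   # prints each split and the resulting node averages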

Figure 6.1 provides a visual of this technique in action. The data is hypothetical, with 30 observations, a response ranging from 1 to 10, and two predictor features named X1 and X2, both ranging in value from 0 to 10. The tree has three splits, leading to four terminal nodes. Each split is basically an if-then statement or, in R syntax, an ifelse() statement. The first split asks whether X1 is less than 3.5, separating four observations with an average response of 2.4 from the remaining 26 observations. This left branch of four observations is a terminal node, as any further split would not substantially improve the RSS, and the predicted value for the observations in that partition is simply their average. The next split is at X2 < 4 and, finally, X1 < 7.5.
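
As an illustration only, the four terminal nodes of such a tree can be written as nested ifelse() statements. Note that only the first node average (2.4) comes from the figure; the remaining node averages and the exact nesting of the second and third splits are hypothetical placeholders.

    # Figure 6.1 expressed as nested ifelse() rules; 2.4 is from the figure,
    # the other node averages (5.0, 6.5, 8.0) are placeholders
    predict_tree <- function(X1, X2) {
      ifelse(X1 < 3.5, 2.4,                      # first split: terminal node
             ifelse(X2 < 4, 5.0,                 # second split (placeholder mean)
                    ifelse(X1 < 7.5, 6.5, 8.0))) # third split (placeholder means)
    }

    predict_tree(X1 = 2, X2 = 8)   # falls in the first terminal node: 2.4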

An advantage of this method is that it can handle highly nonlinear relationships; however, can you see a couple of potential problems? The first issue is that an observation is given the average of the terminal node it falls under, which can hurt the overall predictive performance (high bias). Conversely, if you keep partitioning the data further and further in order to achieve a low bias, high variance becomes an issue. As with the other methods, you can use cross-validation to select the appropriate tree depth.
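
Continuing the earlier rpart sketch, one way to use cross-validation to size the tree is through the complexity parameter table that rpart builds as it grows the tree; the exact values will of course depend on your data.

    # Cross-validated error by tree size for the full.tree grown earlier
    printcp(full.tree)   # cp, number of splits, and cross-validated error (xerror)
    plotcp(full.tree)    # visual aid for choosing the tree size

    # Prune back to the cp value with the lowest cross-validated error
    best.cp <- full.tree$cptable[which.min(full.tree$cptable[, "xerror"]), "CP"]
    pruned.tree <- prune(full.tree, cp = best.cp)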


Figure 6.1: Regression Tree with 3 splits and 4 terminal nodes and the corresponding node average and number of observations

Classification trees

Classification trees operate under the same principle as regression trees, except that the splits are not determined by the RSS but by an error rate. The error rate used is not what you might expect, that is, simply the number of misclassified observations divided by the total number of observations. As it turns out, when it comes to tree splitting, the misclassification rate by itself may lead to a situation where a further split gains information but does not improve that rate. Let's look at an example.

Suppose we have a node, let's call it N0, with seven observations labeled No and three observations labeled Yes, so the misclassification rate is 30 percent. With this in mind, let's calculate a common alternative error measure called the Gini index. The formula for the Gini index of a single node is as follows:

Gini = 1 - (proportion of Class 1)² - (proportion of Class 2)²

Then, for N0, the Gini index is 1 - (0.7)² - (0.3)², which is equal to 0.42, versus the misclassification rate of 30 percent.
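
A few lines of R confirm this arithmetic; the gini() helper below is just for illustration and simply applies the formula to a vector of class counts.

    # Gini index of a single node from its class counts
    gini <- function(counts) {
      p <- counts / sum(counts)   # class proportions within the node
      1 - sum(p^2)
    }

    gini(c(No = 7, Yes = 3))   # 1 - 0.7^2 - 0.3^2 = 0.42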

Taking this example further, we will now create node N1 with three observations from Class 1 (the No class) and none from Class 2 (the Yes class), along with N2, which has four observations from Class 1 and three from Class 2. Now, the overall misclassification rate for this branch of the tree is still 30 percent, but look at how the overall Gini index has improved:

  • Gini(N1) = 1 - (3/3)² - (0/3)² = 0
  • Gini(N2) = 1 - (4/7)² - (3/7)² = 0.49
  • New Gini index = (proportion of N1 x Gini(N1)) + (proportion of N2 x Gini(N2)), which is equal to (0.3 x 0) + (0.7 x 0.49), or 0.343

By splitting on this surrogate error rate, we actually reduced the impurity of the model from 0.42 to 0.343, whereas the misclassification rate did not change. This is the methodology used by the rpart package, which we will be using in this chapter.
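
Reusing the illustrative gini() helper from above, the weighted Gini of the proposed split works out as follows:

    gini_N1 <- gini(c(3, 0))   # 0: a pure node
    gini_N2 <- gini(c(4, 3))   # 1 - (4/7)^2 - (3/7)^2, roughly 0.49

    # Weight each node's Gini by its share of the observations
    (3/10) * gini_N1 + (7/10) * gini_N2   # roughly 0.343, down from 0.42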

Random forest

To greatly improve our model's predictive ability, we can produce numerous trees and combine the results. The random forest technique does this by applying two different tricks in model development. The first is the use of bootstrap aggregation or bagging, as it is called.

In bagging, an individual tree is built on a random sample of the dataset, roughly two-thirds of the total observations (note that the remaining one-third is referred to as out-of-bag (oob)). This is repeated dozens or hundreds of times and the results are averaged. Each of these trees is grown fully and is not pruned based on any error measure, which means that the variance of each individual tree is high. However, by averaging the results, you can reduce the variance without increasing the bias.
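
A quick simulation shows why roughly one-third of the observations end up out-of-bag: a bootstrap sample of size n drawn with replacement leaves about a third of the rows untouched on average. The numbers here are illustrative only.

    set.seed(123)
    n <- 1000
    boot.idx <- sample(1:n, size = n, replace = TRUE)   # bootstrap sample
    mean(!(1:n %in% boot.idx))   # roughly 0.37: the out-of-bag (oob) fraction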

The next thing that random forest brings to the table is that, concurrently with the random sampling of the data, that is, bagging, it also takes a random sample of the input features at each split. In the randomForest package, we will use the default number of predictors sampled at each split, which for classification problems is the square root of the total number of predictors and for regression is the total number of predictors divided by three. The number of predictors the algorithm randomly chooses at each split can be changed via the model tuning process.
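
Here is a minimal sketch of what this looks like with the randomForest package, reusing the hypothetical df from the regression tree example; with only two predictors, the numbers are purely illustrative.

    library(randomForest)

    set.seed(123)
    # Default mtry for regression: the number of predictors divided by three
    rf.fit <- randomForest(y ~ ., data = df, ntree = 500)
    rf.fit   # prints the out-of-bag error estimate

    # mtry can be changed during tuning; setting mtry equal to the total
    # number of predictors reduces the procedure to plain bagging
    rf.bag <- randomForest(y ~ ., data = df, ntree = 500, mtry = 2)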

By taking a random sample of the features at each split and incorporating it into the methodology, you can mitigate the effect of a highly correlated predictor becoming the main driver in all of your bootstrapped trees, which would prevent you from achieving the reduction in variance that you hoped for with bagging. The subsequent averaging of trees that are less correlated with each other is more generalizable and more robust to outliers than bagging alone.

Gradient boosting

Boosting methods can become extremely complicated to learn and understand, but you should keep in mind what is fundamentally happening behind the curtain. The main idea is to build an initial model of some kind (linear, spline, tree, and so on), called the base learner, examine the residuals, and fit a new model to those residuals so as to minimize the so-called loss function. A loss function is merely the function that measures the discrepancy between the model and the desired prediction, for example, a squared error for regression or the logistic loss for classification. The process continues until it reaches some specified stopping criterion. This is rather like the student who takes a practice exam, gets 30 out of 100 questions wrong, and as a result studies only those 30 missed questions. On the next practice exam, they get 10 of those 30 wrong and so focus only on those 10 questions, and so on. If you would like to explore the theory behind this further, a great resource is Gradient boosting machines, a tutorial by Natekin A. and Knoll A. (2013) in Frontiers in Neurorobotics, available at http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3885826/.
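
To make the idea tangible, here is a hand-rolled sketch of boosting with a squared-error loss, fitting small rpart trees to the residuals and adding each one to the model with a small learning rate. It reuses the hypothetical df from earlier and is meant to illustrate the mechanics, not to stand in for a production implementation such as gbm.

    library(rpart)

    shrinkage <- 0.1                          # learning rate
    n.trees   <- 100                          # stopping criterion
    pred      <- rep(mean(df$y), nrow(df))    # initial model: the overall mean

    boost.df <- df[, c("X1", "X2")]
    for (i in 1:n.trees) {
      boost.df$resid <- df$y - pred                          # current residuals
      stump <- rpart(resid ~ X1 + X2, data = boost.df,       # one-split tree
                     control = rpart.control(maxdepth = 1, cp = 0))
      pred <- pred + shrinkage * predict(stump, newdata = boost.df)
    }

    mean((df$y - pred)^2)   # training error after boosting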

As just mentioned, boosting can be applied to many different base learners, but here we will only focus on the specifics of tree-based learning. Each tree iteration is small and we will determine how small with one of the tuning parameters referred to as interaction depth. In fact, it may be as small as one split, which is referred to as a stump.

Trees are sequentially fit to the residuals, according to the loss function, up to the number of trees that we specified (our stopping criterion).

There is another tuning parameter that we will need to identify, and this is shrinkage. You can think of shrinkage as the rate at which your model is learning in general and as the contribution of each tree or stump to the model in particular. This learning rate acts as a regularization parameter, similar to what we discussed in Chapter 4, Advanced Feature Selection in Linear Models.

The other thing about our boosting algorithm is that it is stochastic, meaning that it adds randomness by taking a random sample of the data at each iteration, that is, each time a tree is grown. Introducing some randomness into a boosted model usually improves accuracy and speed and reduces overfitting (Friedman, 2002).
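
Pulling these pieces together, a hedged sketch of a stochastic gradient boosting fit with the gbm package (one common implementation) might look as follows; the parameter values are illustrative rather than recommendations, and df is again the hypothetical toy data from earlier.

    library(gbm)

    set.seed(123)
    gbm.fit <- gbm(y ~ X1 + X2, data = df,
                   distribution = "gaussian",   # squared-error loss for regression
                   n.trees = 1000,              # stopping criterion
                   interaction.depth = 1,       # stumps
                   shrinkage = 0.01,            # learning rate
                   bag.fraction = 0.5,          # random subsample: stochastic boosting
                   n.minobsinnode = 5)          # small value only because df is tiny
    summary(gbm.fit)   # relative influence of each predictor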

As you may have guessed, tuning these parameters can be quite a challenge. They can interact with each other, and if you just tinker with one without considering the others, your model may actually perform worse. The caret package will help us in this endeavor.
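
For example, a caret tuning grid for gbm might be set up along these lines; the simulated data frame sim and the grid values are hypothetical and only show the mechanics.

    library(caret)

    # Simulated data, just to keep the sketch self-contained
    set.seed(123)
    sim <- data.frame(X1 = runif(200, 0, 10), X2 = runif(200, 0, 10))
    sim$y <- ifelse(sim$X1 < 3.5, 2, 6) + rnorm(200)

    grid <- expand.grid(n.trees = c(100, 300, 500),
                        interaction.depth = c(1, 2, 3),
                        shrinkage = c(0.01, 0.1),
                        n.minobsinnode = 10)

    set.seed(123)
    gbm.tuned <- train(y ~ X1 + X2, data = sim, method = "gbm",
                       trControl = trainControl(method = "cv", number = 5),
                       tuneGrid = grid, verbose = FALSE)
    gbm.tuned$bestTune   # the combination with the lowest cross-validated error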
