Modeling in R

In this example, we will use the rpart package to build a decision tree. rpart grows a tree and cross-validates candidate subtrees, so that the tree with the minimum prediction error can be selected. After that, the tree is applied to make predictions for unlabeled data with the predict function.

One way to call rpart is simply to give it a formula listing the variables of interest and see what it produces. Although we have discussed missing values, rpart has built-in handling for them: rows with missing predictor values are kept and, by default, routed through the tree using surrogate splits.
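As a quick illustration, here is a minimal, self-contained sketch; the toy data frame and its columns are made up for the example, and rpart still fits a tree even though one of the predictors contains NA values:

library(rpart)

# Hypothetical toy data: x1 deliberately contains missing values
toy <- data.frame(
  y  = factor(c("a", "a", "b", "b", "a", "b")),
  x1 = c(1, 2, NA, 4, 5, NA),
  x2 = c(6, 5, 4, 3, 2, 1)
)

# rpart's default na.action (na.rpart) keeps rows with missing predictor
# values; surrogate splits decide where those rows go in the tree
fit <- rpart(y ~ x1 + x2, data = toy, minsplit = 2)

With that in mind, let's dive in and look at the code.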

First, we need to load the libraries that we will use:

library(rpart)        # decision trees
library(rpart.plot)   # tree plotting
library(caret)        # model evaluation utilities
library(e1071)        # supporting functions used by caret
library(arules)       # provides the AdultUCI data set

Next, let's load in the data, which will be in the AdultUCI variable:

data("AdultUCI");
AdultUCI
## 75% of the sample size
sample_size <- floor(0.80 * nrow(AdultUCI))

## set the seed to make your partition reproducible
set.seed(123)

## Set a variable to have the sample size
training.indicator <- sample(seq_len(nrow(AdultUCI)), size = sample_size)

# Set up the training and test sets of data
adult.training <- AdultUCI[training.indicator, ]
adult.test <- AdultUCI[-training.indicator, ]
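As a quick sanity check, we can confirm that the two partitions together cover the whole data set (this assumes the objects created just above):

# The training and test sets should partition AdultUCI completely
nrow(adult.training) + nrow(adult.test) == nrow(AdultUCI)   # should print TRUE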

## set up a formula with the most important features
features <- income ~ age + education + `education-num`

# Let's use the training data to fit the model
model <- rpart(features, data = adult.training)

# Now, let's use the test data to evaluate the model's performance
pred <- predict(model, adult.test, type = "class")
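Since caret is already loaded, a quick way to quantify how well the tree performs on the test set is a confusion matrix. The following is a minimal sketch; note that some rows of AdultUCI have a missing income label, so we restrict the comparison to the labelled rows:

# Compare predictions against the true labels on the labelled test rows
labelled <- !is.na(adult.test$income)
confusionMatrix(pred[labelled], adult.test$income[labelled])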

# Let's print the model
print(model)

# Results
# 1) root 32561 7841 small (0.7591904 0.2408096)
#   2) education-num< 12.5 24494 3932 small (0.8394709 0.1605291) *
#   3) education-num>=12.5 8067 3909 small (0.5154332 0.4845668)
#     6) age< 29.5 1617  232 small (0.8565244 0.1434756) *
#     7) age>=29.5 6450 2773 large (0.4299225 0.5700775) *

# Display the complexity parameter (CP) table from cross-validation
printcp(model)

# Plot the cross-validated error against the complexity parameter
plotcp(model)

# Detailed results, including variable importance and the node splits
summary(model)

# Inspect the predictions made on the test set
print(pred)
summary(pred)
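The CP table that printcp displays is also what allows us to select the tree with the minimum prediction error mentioned at the start of this section. Here is a minimal sketch of that selection, using the cptable element of the fitted model:

# Pick the complexity parameter with the lowest cross-validated error,
# then prune the tree back to that size
best.cp <- model$cptable[which.min(model$cptable[, "xerror"]), "CP"]
pruned.model <- prune(model, cp = best.cp)
print(pruned.model)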

# plot tree
plot(model, uniform=TRUE,
     main="Decision Tree for Adult data")
text(model, use.n=TRUE, all=TRUE, cex=.8)


# A cleaner rendering of the same tree using rpart.plot
prp(model, faclen = 0, cex = 0.5, extra = 1)

We can see the final result in the following diagram:

[Figure: Decision Tree for Adult data]

Analyzing the results of the decision tree

The decision tree grows from top to bottom, starting from a root decision node. The branches from this node represent two, or possibly more, options that are available to the decision maker.

At the end of the branches, we find one of two things. First, we may find an end node, which represents a fixed value and can be understood as a stop in the decision process. Alternatively, we may find an uncertainty node, which has further possible outcomes branching from it; the probabilities on the branches leaving an uncertainty node sum to 1. Eventually, all of the branches terminate in an end node.
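We can check this property on our fitted tree. The following sketch, which assumes the model and adult.test objects from earlier, asks rpart for the class probabilities and verifies that each row sums to 1:

# Class probabilities assigned by the tree; each row should sum to 1
probs <- predict(model, adult.test, type = "prob")
head(rowSums(probs))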

Decision trees have inputs and outputs. In this example, we have provided the decision tree with a set of training data, and in R it returns a range of output, such as the fitted splits and the predictions. Decision trees are useful because they provide an easily interpreted model, and they also demonstrate the relative importance of the variables.
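Beyond the summary output discussed next, the importance scores can also be pulled straight from the fitted object; a short sketch, assuming the model from earlier:

# Named vector of importance scores, in decreasing order
model$variable.importance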

Let's take a look at some of the main points of the results. From the output, we can see the following table, called Variable importance:

Variable importance
education-num     education           age
           44            40            16

This tells us that education-num has the highest importance score, closely followed by education. We could now do some further analysis to explore the correlation between these two variables; if they are highly correlated, then we may consider removing one of them.
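One quick way to run that check, as a sketch: education is a factor whose levels follow the natural schooling order in AdultUCI, so we can compare its integer codes with the numeric education-num column using a rank-based correlation (complete cases only, since both columns contain missing values):

# Spearman correlation between the coded education factor and education-num
edu.code <- as.integer(adult.training$education)
edu.num  <- adult.training$`education-num`
cor(edu.code, edu.num, use = "complete.obs", method = "spearman")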

Next, we have the results for each of the nodes. In this section, we get the number of observations, the class probabilities, and the split applied at each node, continuing until a terminal node is reached.
