Ridge regression

For demonstration purposes, let's try ridge regression with a one-versus-rest method. To do this, create a MulticlassWrapper around a binary classification learner. The classif.penalized.ridge learner comes from the penalized package, so make sure you have it installed:

    > ovr <- makeMulticlassWrapper("classif.penalized.ridge",
        mcw.method = "onevsrest")

Now let's go ahead and create a bagging wrapper for our classifier that resamples over 10 iterations (the default) with replacement, sampling 70% of the observations and all of the input features:

    > bag.ovr = makeBaggingWrapper(ovr,
        bw.iters = 10,     # default of 10
        bw.replace = TRUE, # default
        bw.size = 0.7,
        bw.feats = 1)

This can now be used to train our algorithm. Notice in the code I put mlr:: before train(). The reason is that caret also has a train() function, so we are specifying that we want mlr's train() function and not caret's. If this is not done when both packages are loaded, you can end up with an egregious error:

    > set.seed(317)
    > fitOVR <- mlr::train(bag.ovr, wine.task)
    > predOVR <- predict(fitOVR, newdata = test)

Let's see how it did:

    > head(data.frame(predOVR))
       truth response
    60     2        2
    78     2        2
    79     2        2
    49     1        1
    19     1        1
    69     2        2
    > getConfMatrix(predOVR)
            predicted
    true     1  2  3 -SUM-
      1     58  0  0     0
      2      0 71  0     0
      3      0  0 57     0
      -SUM-  0  0  0     0

Again, it is just too easy. However, don't focus on the accuracy so much as on the methodology: creating your classifier, tuning any parameters, and implementing a resampling strategy.
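Beyond eyeballing the confusion matrix, mlr can score a prediction object directly with its performance() function. A minimal sketch, assuming the mlr package is loaded and the predOVR object from above is in your workspace:

```r
# Score the prediction object against two of mlr's built-in measures:
# acc (accuracy) and mmce (mean misclassification error)
library(mlr)
performance(predOVR, measures = list(acc, mmce))
```

Given the perfect confusion matrix above, this should report an accuracy of 1 and an mmce of 0.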
