Interpreting support vector machine results

We now have a fitted support vector machine; what should we do next? Let's recall the final objective: we want to understand which features are the most influential in determining the probability of a customer going into default by not repaying their bills.

We can do this by looking at the coefficients that define our hyperplane. Unfortunately, these coefficients are not stored directly within the support_vector_machine_linear object. The object does, however, contain the full list of support vectors (SV) and their coefficients (coefs), which together measure each support vector's position relative to the hyperplane.
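Before combining them, it helps to see what these two components look like. The following is a minimal sketch using a small synthetic data set (the data, variable names, and toy_svm object here are illustrative, not the book's actual customer data, but the fitted object has the same structure as support_vector_machine_linear):

```r
library(e1071)

# Illustrative two-feature, two-class data set
set.seed(42)
toy_data <- data.frame(x1 = rnorm(100), x2 = rnorm(100))
toy_data$default <- factor(ifelse(toy_data$x1 + toy_data$x2 > 0, "yes", "no"))

toy_svm <- svm(default ~ ., data = toy_data, kernel = "linear")

# coefs: one coefficient per support vector
# SV:    the support vectors themselves, one row per vector
dim(toy_svm$coefs)  # n_support_vectors x 1
dim(toy_svm$SV)     # n_support_vectors x n_features
```

Note that coefs has one row per support vector and SV has one row per support vector and one column per feature, which is exactly the shape we need for the matrix multiplication below.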

It turns out that, starting from these two groups of values, we can compute the feature weights as a matrix multiplication of the two, as follows:

weights <- t(support_vector_machine_linear$coefs) %*% support_vector_machine_linear$SV 

I'm not expecting all of this to be immediately clear to you; just keep in mind that, from the svm() output, it is possible to recover the final weights that determine the hyperplane. Let's take a look at those weights and try to figure out their meaning. We can also start comparing them with the message coming from the logistic regression.
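The whole flow, from fitting to ranking features by weight, can be sketched as follows. This is a hedged example on synthetic data with made-up feature names (bill_amount, payment_delay, credit_limit are illustrative, not the book's actual customer variables):

```r
library(e1071)

# Illustrative data: class depends mostly on payment_delay and credit_limit
set.seed(42)
toy_data <- data.frame(
  bill_amount   = rnorm(100),
  payment_delay = rnorm(100),
  credit_limit  = rnorm(100)
)
toy_data$default <- factor(
  ifelse(toy_data$payment_delay - toy_data$credit_limit > 0, "yes", "no")
)

toy_svm <- svm(default ~ ., data = toy_data, kernel = "linear")

# t(coefs) %*% SV collapses the support vectors into
# one weight per feature
weights <- t(toy_svm$coefs) %*% toy_svm$SV

# Rank features by the absolute size of their weight:
# the larger |weight|, the stronger the feature's pull
# on which side of the hyperplane a customer falls
sort(abs(weights[1, ]), decreasing = TRUE)
```

The sign of each weight tells us the direction of the effect, while the absolute value tells us the strength, which is what makes these weights comparable in spirit to logistic regression coefficients.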
