Modeling class probabilities via logistic regression

Although the perceptron rule offers a nice and easygoing introduction to machine learning algorithms for classification, its biggest disadvantage is that it never converges if the classes are not perfectly linearly separable. The classification task in the previous section would be an example of such a scenario. Intuitively, the reason is that the weights are continuously being updated, since at least one misclassified sample is present in every epoch. Of course, you can change the learning rate and increase the number of epochs, but be warned that the perceptron will never converge on this dataset. To make better use of our time, we will now take a look at another simple yet more powerful algorithm for linear and binary classification problems: logistic regression. Note that, in spite of its name, logistic regression is a model for classification, not regression.

Logistic regression intuition and conditional probabilities

Logistic regression is a classification model that is very easy to implement but performs very well on linearly separable classes. It is one of the most widely used algorithms for classification in industry. Similar to the perceptron and Adaline, the logistic regression model in this chapter is also a linear model for binary classification that can be extended to multiclass classification, for example, via the OvR technique.

To explain the idea behind logistic regression as a probabilistic model, let's first introduce the odds ratio: the odds in favor of a particular event. The odds ratio can be written as $\frac{p}{1-p}$, where $p$ stands for the probability of the positive event. The term positive event does not necessarily mean good, but refers to the event that we want to predict, for example, the probability that a patient has a certain disease; we can think of the positive event as class label $y = 1$. We can then further define the logit function, which is simply the logarithm of the odds ratio (log-odds):

$$\text{logit}(p) = \log\frac{p}{1-p}$$

Note that log refers to the natural logarithm, as it is the common convention in computer science. The logit function takes as input values in the range 0 to 1 and transforms them to values over the entire real-number range, which we can use to express a linear relationship between feature values and the log-odds:

$$\text{logit}\big(p(y=1 \mid \boldsymbol{x})\big) = w_0 x_0 + w_1 x_1 + \cdots + w_m x_m = \sum_{i=0}^{m} w_i x_i = \boldsymbol{w}^T\boldsymbol{x}$$

Here, $p(y=1 \mid \boldsymbol{x})$ is the conditional probability that a particular sample belongs to class 1 given its features $\boldsymbol{x}$.
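
As a quick numerical sketch (the helper name logit below is mine, not part of the chapter's code), we can evaluate the log-odds for a few probability values:

>>> import numpy as np
>>> def logit(p):
...     return np.log(p / (1.0 - p))   # the log-odds
>>> logit(0.5)   # 0.0 -- even odds
>>> logit(0.9)   # about +2.2 -- odds strongly in favor of the positive event
>>> logit(0.1)   # about -2.2 -- odds strongly against the positive event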

Now, we are actually interested in predicting the probability that a certain sample belongs to a particular class, which is the inverse form of the logit function. It is also called the logistic sigmoid function, sometimes simply abbreviated to sigmoid function due to its characteristic S-shape:

$$\phi(z) = \frac{1}{1 + e^{-z}}$$

Here, $z$ is the net input, the linear combination of weights and sample features, $z = \boldsymbol{w}^T\boldsymbol{x} = w_0 x_0 + w_1 x_1 + \cdots + w_m x_m$.

Note

Note that, similar to the convention we used in Chapter 2, Training Simple Machine Learning Algorithms for Classification, $w_0$ refers to the weight of the bias unit, $x_0$, which is an additional input value that we set equal to 1.
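
The following minimal sketch (with made-up numbers) shows this net input convention in NumPy; w[0] plays the role of the bias weight that multiplies $x_0 = 1$, just like the net_input method of the implementation later in this section:

>>> w = np.array([0.1, 0.3, -0.2])   # w[0] is the bias weight
>>> x = np.array([1.0, 2.0])         # a single sample with two features
>>> z = np.dot(x, w[1:]) + w[0]      # net input z = w^T x
>>> z                                # 0.1 + 0.3*1.0 + (-0.2)*2.0 = 0.0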

Now let us simply plot the sigmoid function for some values in the range -7 to 7 to see how it looks:

>>> import matplotlib.pyplot as plt
>>> import numpy as np
>>> def sigmoid(z):
...     return 1.0 / (1.0 + np.exp(-z))
>>> z = np.arange(-7, 7, 0.1)
>>> phi_z = sigmoid(z)
>>> plt.plot(z, phi_z)
>>> plt.axvline(0.0, color='k')
>>> plt.ylim(-0.1, 1.1)
>>> plt.xlabel('z')
>>> plt.ylabel('$\phi(z)$')
>>> # y axis ticks and gridline
>>> plt.yticks([0.0, 0.5, 1.0])
>>> ax = plt.gca()
>>> ax.yaxis.grid(True)
>>> plt.show()

As a result of executing the previous code example, we should now see the S-shaped (sigmoidal) curve:

[Figure: the S-shaped (sigmoidal) curve of the logistic sigmoid function plotted over the range -7 to 7]

We can see that $\phi(z)$ approaches 1 if $z$ goes towards infinity ($z \to \infty$), since $e^{-z}$ becomes very small for large values of $z$. Similarly, $\phi(z)$ goes towards 0 for $z \to -\infty$ as a result of an increasingly large denominator. Thus, we can conclude that this sigmoid function takes real-number values as input and transforms them into values in the range [0, 1], with an intercept at $\phi(z) = 0.5$.
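
We can quickly double-check these limiting values with the sigmoid function we just defined (a small sanity check, not part of the original example):

>>> sigmoid(0.0)      # exactly 0.5 -- the intercept of the S-curve
>>> sigmoid(100.0)    # 1.0 to double precision as z goes towards infinity
>>> sigmoid(-100.0)   # about 3.7e-44, practically 0, as z goes towards minus infinity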

To build some intuition for the logistic regression model, we can relate it to Chapter 2, Training Simple Machine Learning Algorithms for Classification. In Adaline, we used the identity function $\phi(z) = z$ as the activation function. In logistic regression, this activation function simply becomes the sigmoid function that we defined earlier. The difference between Adaline and logistic regression is illustrated in the following figure:

[Figure: Adaline compared to logistic regression; the linear (identity) activation function is replaced by the sigmoid activation function]

The output of the sigmoid function is then interpreted as the probability of a particular sample belonging to class 1, $\phi(z) = P(y=1 \mid \boldsymbol{x}; \boldsymbol{w})$, given its features $\boldsymbol{x}$ parameterized by the weights $\boldsymbol{w}$. For example, if we compute $\phi(z) = 0.8$ for a particular flower sample, it means that the chance that this sample is an Iris-versicolor flower is 80 percent. Therefore, the probability that this flower is an Iris-setosa flower can be calculated as $P(y=0 \mid \boldsymbol{x}; \boldsymbol{w}) = 1 - P(y=1 \mid \boldsymbol{x}; \boldsymbol{w}) = 0.2$, or 20 percent. The predicted probability can then simply be converted into a binary outcome via a threshold function:

$$\hat{y} = \begin{cases} 1 & \text{if } \phi(z) \geq 0.5 \\ 0 & \text{otherwise} \end{cases}$$

If we look at the preceding plot of the sigmoid function, this is equivalent to the following:

$$\hat{y} = \begin{cases} 1 & \text{if } z \geq 0.0 \\ 0 & \text{otherwise} \end{cases}$$
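
In code, this thresholding is a one-liner; the probability values below are made up purely for illustration:

>>> probas = np.array([0.2, 0.5, 0.8])   # hypothetical sigmoid outputs
>>> np.where(probas >= 0.5, 1, 0)        # array([0, 1, 1])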

In fact, there are many applications where we are not only interested in the predicted class labels, but where the estimation of the class-membership probability is particularly useful (the output of the sigmoid function prior to applying the threshold function). Logistic regression is used in weather forecasting, for example, not only to predict if it will rain on a particular day but also to report the chance of rain. Similarly, logistic regression can be used to predict the chance that a patient has a particular disease given certain symptoms, which is why logistic regression enjoys great popularity in the field of medicine.

Learning the weights of the logistic cost function

You learned how we could use the logistic regression model to predict probabilities and class labels; now, let us briefly talk about how we fit the parameters of the model, for instance, the weights $\boldsymbol{w}$. In the previous chapter, we defined the sum-squared-error cost function as follows:

$$J(\boldsymbol{w}) = \sum_{i}\frac{1}{2}\Big(\phi\big(z^{(i)}\big) - y^{(i)}\Big)^2$$

We minimized this function in order to learn the weights w for our Adaline classification model. To explain how we can derive the cost function for logistic regression, let's first define the likelihood L that we want to maximize when we build a logistic regression model, assuming that the individual samples in our dataset are independent of one another. The formula is as follows:

$$L(\boldsymbol{w}) = P(\boldsymbol{y} \mid \boldsymbol{x}; \boldsymbol{w}) = \prod_{i=1}^{n} P\big(y^{(i)} \mid \boldsymbol{x}^{(i)}; \boldsymbol{w}\big) = \prod_{i=1}^{n}\Big(\phi\big(z^{(i)}\big)\Big)^{y^{(i)}}\Big(1 - \phi\big(z^{(i)}\big)\Big)^{1 - y^{(i)}}$$

In practice, it is easier to maximize the (natural) log of this equation, which is called the log-likelihood function:

$$l(\boldsymbol{w}) = \log L(\boldsymbol{w}) = \sum_{i=1}^{n}\Big[y^{(i)}\log\Big(\phi\big(z^{(i)}\big)\Big) + \big(1 - y^{(i)}\big)\log\Big(1 - \phi\big(z^{(i)}\big)\Big)\Big]$$

Firstly, applying the log function reduces the potential for numerical underflow, which can occur if the likelihoods are very small. Secondly, we can convert the product of factors into a summation of factors, which makes it easier to obtain the derivative of this function via the addition trick, as you may remember from calculus.
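
To make the numerical-underflow point concrete, here is a small, made-up example: multiplying a few thousand small per-sample likelihoods underflows to zero in double precision, while summing their logarithms remains well behaved:

>>> likelihoods = np.full(2000, 0.5)   # 2,000 hypothetical per-sample likelihoods
>>> np.prod(likelihoods)               # underflows to 0.0 in double precision
>>> np.sum(np.log(likelihoods))        # about -1386.3 -- no underflow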

Now we could use an optimization algorithm such as gradient ascent to maximize this log-likelihood function. Alternatively, let's rewrite the log-likelihood as a cost function J that can be minimized using gradient descent as in Chapter 2, Training Simple Machine Learning Algorithms for Classification:

$$J(\boldsymbol{w}) = \sum_{i=1}^{n}\Big[-y^{(i)}\log\Big(\phi\big(z^{(i)}\big)\Big) - \big(1 - y^{(i)}\big)\log\Big(1 - \phi\big(z^{(i)}\big)\Big)\Big]$$
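
As a sketch with hypothetical labels and activations, this cost can be computed directly in NumPy; the same expression appears later in the fit method of the LogisticRegressionGD implementation:

>>> y_true = np.array([1, 0, 1, 1])            # hypothetical class labels
>>> output = np.array([0.9, 0.2, 0.8, 0.6])    # hypothetical sigmoid activations
>>> cost = (-y_true.dot(np.log(output)) -
...         ((1 - y_true).dot(np.log(1 - output))))
>>> cost                                       # roughly 1.06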

To get a better grasp of this cost function, let us take a look at the cost that we calculate for one single-sample training instance:

$$J\big(\phi(z), y; \boldsymbol{w}\big) = -y\log\big(\phi(z)\big) - (1-y)\log\big(1-\phi(z)\big)$$

Looking at the equation, we can see that the first term becomes zero if $y = 0$, and the second term becomes zero if $y = 1$:

$$J\big(\phi(z), y; \boldsymbol{w}\big) = \begin{cases} -\log\big(\phi(z)\big) & \text{if } y = 1 \\ -\log\big(1 - \phi(z)\big) & \text{if } y = 0 \end{cases}$$

Let's write a short code snippet to create a plot that illustrates the cost of classifying a single-sample instance for different values of $\phi(z)$:

>>> def cost_1(z):
...     return - np.log(sigmoid(z))
>>> def cost_0(z):
...     return - np.log(1 - sigmoid(z))
>>> z = np.arange(-10, 10, 0.1)
>>> phi_z = sigmoid(z)
>>> c1 = [cost_1(x) for x in z]
>>> plt.plot(phi_z, c1, label='J(w) if y=1')
>>> c0 = [cost_0(x) for x in z]
>>> plt.plot(phi_z, c0, linestyle='--', label='J(w) if y=0')
>>> plt.ylim(0.0, 5.1)
>>> plt.xlim([0, 1])
>>> plt.xlabel('$\phi(z)$')
>>> plt.ylabel('J(w)')
>>> plt.legend(loc='best')
>>> plt.show()

The resulting plot shows the sigmoid activation on the x-axis, in the range 0 to 1 (the inputs to the sigmoid function were z values in the range -10 to 10), and the associated logistic cost on the y-axis:

[Figure: the logistic cost J(w) for y=1 (continuous line) and y=0 (dashed line) plotted against the sigmoid activation phi(z)]

We can see that the cost approaches 0 (continuous line) if we correctly predict that a sample belongs to class 1. Similarly, we can see on the y-axis that the cost also approaches 0 if we correctly predict $y = 0$ (dashed line). However, if the prediction is wrong, the cost goes towards infinity. The main point is that we penalize wrong predictions with an increasingly larger cost.

Converting an Adaline implementation into an algorithm for logistic regression

If we were to implement logistic regression ourselves, we could simply substitute the cost function J in our Adaline implementation from Chapter 2, Training Simple Machine Learning Algorithms for Classification with the new cost function:

$$J(\boldsymbol{w}) = -\sum_{i}\Big[y^{(i)}\log\Big(\phi\big(z^{(i)}\big)\Big) + \big(1 - y^{(i)}\big)\log\Big(1 - \phi\big(z^{(i)}\big)\Big)\Big]$$

We use this to compute the cost of classifying all training samples per epoch. Also, we need to swap the linear activation function with the sigmoid activation and change the threshold function to return class labels 0 and 1 instead of -1 and 1. If we make those three changes to the Adaline code, we would end up with a working logistic regression implementation, as shown here:

class LogisticRegressionGD(object):
    """Logistic Regression Classifier using gradient descent.

    Parameters
    ------------
    eta : float
      Learning rate (between 0.0 and 1.0)
    n_iter : int
      Passes over the training dataset.
    random_state : int
      Random number generator seed for random weight
      initialization.


    Attributes
    -----------
    w_ : 1d-array
      Weights after fitting.
    cost_ : list
      logistic cost function value in each epoch.

    """
    def __init__(self, eta=0.05, n_iter=100, random_state=1):
        self.eta = eta
        self.n_iter = n_iter
        self.random_state = random_state

    def fit(self, X, y):
        """ Fit training data.

        Parameters
        ----------
        X : {array-like}, shape = [n_samples, n_features]
          Training vectors, where n_samples is the number of
          samples and n_features is the number of features.
        y : array-like, shape = [n_samples]
          Target values.

        Returns
        -------
        self : object

        """
        rgen = np.random.RandomState(self.random_state)
        self.w_ = rgen.normal(loc=0.0, scale=0.01, 
                              size=1 + X.shape[1])
        self.cost_ = []

        for i in range(self.n_iter):
            net_input = self.net_input(X)
            output = self.activation(net_input)
            errors = (y - output)
            self.w_[1:] += self.eta * X.T.dot(errors)
            self.w_[0] += self.eta * errors.sum()
            
            # note that we compute the logistic `cost` now
            # instead of the sum of squared errors cost
            cost = (-y.dot(np.log(output)) - 
                    ((1 - y).dot(np.log(1 - output))))
            self.cost_.append(cost)
        return self
    
    def net_input(self, X):
        """Calculate net input"""
        return np.dot(X, self.w_[1:]) + self.w_[0]

    def activation(self, z):
        """Compute logistic sigmoid activation"""
        return 1. / (1. + np.exp(-np.clip(z, -250, 250)))

    def predict(self, X):
        """Return class label after unit step"""
        return np.where(self.net_input(X) >= 0.0, 1, 0)
        # equivalent to:
        # return np.where(self.activation(self.net_input(X)) 
        #                 >= 0.5, 1, 0)

When we fit a logistic regression model, we have to keep in mind that it only works for binary classification tasks. So, let us consider only Iris-setosa and Iris-versicolor flowers (classes 0 and 1) and check that our implementation of logistic regression works:

>>> X_train_01_subset = X_train[(y_train == 0) | (y_train == 1)]
>>> y_train_01_subset = y_train[(y_train == 0) | (y_train == 1)]
>>> lrgd = LogisticRegressionGD(eta=0.05, 
...                             n_iter=1000,
...                             random_state=1)
>>> lrgd.fit(X_train_01_subset,
...          y_train_01_subset)
>>> plot_decision_regions(X=X_train_01_subset, 
...                              y=y_train_01_subset,
...                              classifier=lrgd)
>>> plt.xlabel('petal length [standardized]')
>>> plt.ylabel('petal width [standardized]')
>>> plt.legend(loc='upper left')
>>> plt.show()

The resulting decision region plot looks as follows:

[Figure: decision regions of the LogisticRegressionGD classifier trained on the Iris-setosa and Iris-versicolor training subset]

Note

The gradient descent learning algorithm for logistic regression

Using calculus, we can show that the weight update in logistic regression via gradient descent is equal to the equation that we used in Adaline in Chapter 2, Training Simple Machine Learning Algorithms for Classification. However, please note that the following derivation of the gradient descent learning rule is intended for readers who are interested in the mathematical concepts behind the gradient descent learning rule for logistic regression. It is not essential for following the rest of this chapter.

Let's start by calculating the partial derivative of the log-likelihood function with respect to the jth weight:

$$\frac{\partial}{\partial w_j} l(\boldsymbol{w}) = \left(y\,\frac{1}{\phi(z)} - (1 - y)\,\frac{1}{1 - \phi(z)}\right)\frac{\partial}{\partial w_j}\phi(z)$$

Before we continue, let's also calculate the partial derivative of the sigmoid function:

$$\frac{\partial}{\partial z}\phi(z) = \frac{\partial}{\partial z}\,\frac{1}{1 + e^{-z}} = \frac{e^{-z}}{\big(1 + e^{-z}\big)^2} = \frac{1}{1 + e^{-z}}\left(1 - \frac{1}{1 + e^{-z}}\right) = \phi(z)\big(1 - \phi(z)\big)$$

Now, we can re-substitute $\frac{\partial}{\partial z}\phi(z) = \phi(z)\big(1 - \phi(z)\big)$ in our first equation to obtain the following:

$$\begin{aligned}\frac{\partial}{\partial w_j} l(\boldsymbol{w}) &= \left(y\,\frac{1}{\phi(z)} - (1 - y)\,\frac{1}{1 - \phi(z)}\right)\phi(z)\big(1 - \phi(z)\big)\frac{\partial}{\partial w_j}z \\ &= \Big(y\big(1 - \phi(z)\big) - (1 - y)\,\phi(z)\Big)x_j \\ &= \big(y - \phi(z)\big)x_j\end{aligned}$$

Remember that the goal is to find the weights that maximize the log-likelihood so that we perform the update for each weight as follows:

$$w_j := w_j + \eta\sum_{i=1}^{n}\Big(y^{(i)} - \phi\big(z^{(i)}\big)\Big)x_j^{(i)}$$

Since we update all weights simultaneously, we can write the general update rule as follows: $\boldsymbol{w} := \boldsymbol{w} + \Delta\boldsymbol{w}$

We define $\Delta\boldsymbol{w}$ as follows: $\Delta\boldsymbol{w} = \eta\,\nabla l(\boldsymbol{w})$

Since maximizing the log-likelihood is equal to minimizing the cost function J that we defined earlier, we can write the gradient descent update rule as follows:

$$\Delta w_j = -\eta\,\frac{\partial J}{\partial w_j} = \eta\sum_{i=1}^{n}\Big(y^{(i)} - \phi\big(z^{(i)}\big)\Big)x_j^{(i)}$$

$$\boldsymbol{w} := \boldsymbol{w} + \Delta\boldsymbol{w}, \quad \Delta\boldsymbol{w} = -\eta\,\nabla J(\boldsymbol{w})$$

This is equal to the gradient descent rule for Adaline in Chapter 2, Training Simple Machine Learning Algorithms for Classification.
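
If you would like to verify this result numerically, the following sketch (with made-up weights and a single made-up sample) compares the analytical gradient of the single-sample cost, $-\big(y - \phi(z)\big)\boldsymbol{x}$, against a central-difference approximation:

>>> w = np.array([0.5, -0.3])                 # hypothetical weights (no bias, for brevity)
>>> x = np.array([1.2, 0.7])                  # a single hypothetical sample
>>> y = 1.0
>>> def cost(w):
...     phi = sigmoid(np.dot(x, w))
...     return -y * np.log(phi) - (1 - y) * np.log(1 - phi)
>>> analytical = -(y - sigmoid(np.dot(x, w))) * x      # gradient of the single-sample cost
>>> eps = 1e-6
>>> numerical = np.array([(cost(w + eps * e) - cost(w - eps * e)) / (2 * eps)
...                       for e in np.eye(2)])         # central differences
>>> np.allclose(analytical, numerical)                 # True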

Training a logistic regression model with scikit-learn

We just went through useful coding and math exercises in the previous subsection, which helped illustrate the conceptual differences between Adaline and logistic regression. Now, let's learn how to use scikit-learn's more optimized implementation of logistic regression that also supports multi-class settings off the shelf (OvR by default). In the following code example, we will use the sklearn.linear_model.LogisticRegression class as well as the familiar fit method to train the model on all three classes in the standardized flower training dataset:

>>> from sklearn.linear_model import LogisticRegression
>>> lr = LogisticRegression(C=100.0, random_state=1)
>>> lr.fit(X_train_std, y_train)
>>> plot_decision_regions(X_combined_std,
...                       y_combined, 
...                       classifier=lr,
...                       test_idx=range(105, 150))
>>> plt.xlabel('petal length [standardized]')
>>> plt.ylabel('petal width [standardized]')
>>> plt.legend(loc='upper left')
>>> plt.show()

After fitting the model on the training data, we plotted the decision regions, training samples, and test samples, as shown in the following figure:

[Figure: decision regions, training samples, and test samples for the scikit-learn LogisticRegression model trained on all three Iris classes]

Looking at the preceding code that we used to train the LogisticRegression model, you might now be wondering, "What is this mysterious parameter C?" We will discuss this parameter in the next subsection, where we first introduce the concepts of overfitting and regularization. However, before we move on to those topics, let's finish our discussion of class-membership probabilities.

The probability that samples belong to a certain class can be computed using the predict_proba method. For example, we can predict the probabilities of the first three samples in the test set as follows:

>>> lr.predict_proba(X_test_std[:3, :])

This code snippet returns the following array:

array([[  3.20136878e-08,   1.46953648e-01,   8.53046320e-01],
       [  8.34428069e-01,   1.65571931e-01,   4.57896429e-12],
       [  8.49182775e-01,   1.50817225e-01,   4.65678779e-13]])

The first row corresponds to the class-membership probabilities of the first flower, the second row corresponds to the class-membership probabilities of the second flower, and so forth. Notice that the values in each row sum up to one, as expected (you can confirm this by executing lr.predict_proba(X_test_std[:3, :]).sum(axis=1)). The highest value in the first row is approximately 0.853, which means that the first sample belongs to the third class (Iris-virginica) with a predicted probability of 85.3 percent. So, as you may have already noticed, we can get the predicted class labels by identifying the column with the largest value in each row, for example, using NumPy's argmax function:

>>> lr.predict_proba(X_test_std[:3, :]).argmax(axis=1)

The returned class indices are shown here (they correspond to Iris-virginica, Iris-setosa, and Iris-setosa):

array([2, 0, 0])

Obtaining the class labels from the preceding conditional probabilities is, of course, just a manual alternative to calling the predict method directly, which we can quickly verify as follows:

>>> lr.predict(X_test_std[:3, :])
array([2, 0, 0])

Lastly, a word of caution if you want to predict the class label of a single flower sample: scikit-learn expects a two-dimensional array as data input; thus, we have to convert a single row slice into such a format first. One way to convert a single row entry into a two-dimensional data array is to use NumPy's reshape method to add a new dimension, as demonstrated here:

>>> lr.predict(X_test_std[0, :].reshape(1, -1)) 
array([2])

Tackling overfitting via regularization

Overfitting is a common problem in machine learning, where a model performs well on training data but does not generalize well to unseen data (test data). If a model suffers from overfitting, we also say that the model has a high variance, which can be caused by having too many parameters that lead to a model that is too complex given the underlying data. Similarly, our model can also suffer from underfitting (high bias), which means that our model is not complex enough to capture the pattern in the training data well and therefore also suffers from low performance on unseen data.

Although we have only encountered linear models for classification so far, the problem of overfitting and underfitting can be best illustrated by comparing a linear decision boundary to more complex, nonlinear decision boundaries as shown in the following figure:

[Figure: a linear decision boundary compared to more complex, nonlinear decision boundaries, illustrating underfitting and overfitting]

Note

Variance measures the consistency (or variability) of the model prediction for a particular sample instance if we were to retrain the model multiple times, for example, on different subsets of the training dataset. We can say that the model is sensitive to the randomness in the training data. In contrast, bias measures how far off the predictions are from the correct values in general if we rebuild the model multiple times on different training datasets; bias is the measure of the systematic error that is not due to randomness.

One way of finding a good bias-variance tradeoff is to tune the complexity of the model via regularization. Regularization is a very useful method to handle collinearity (high correlation among features), filter out noise from data, and eventually prevent overfitting. The concept behind regularization is to introduce additional information (bias) to penalize extreme parameter (weight) values. The most common form of regularization is so-called L2 regularization (sometimes also called L2 shrinkage or weight decay), which can be written as follows:

$$\frac{\lambda}{2}\lVert\boldsymbol{w}\rVert^2 = \frac{\lambda}{2}\sum_{j=1}^{m} w_j^2$$

Here, $\lambda$ is the so-called regularization parameter.
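
As a small numerical sketch (the weight values and lambda_ are made up), the penalty is simply half of $\lambda$ times the squared L2 norm of the weight vector; by convention, the bias weight is usually excluded:

>>> w = np.array([0.0, 1.5, -2.0])    # hypothetical weights; w[0] is the bias weight
>>> lambda_ = 0.1                     # hypothetical regularization parameter
>>> (lambda_ / 2.0) * np.sum(w[1:] ** 2)   # 0.05 * (1.5**2 + 2.0**2) = 0.3125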

Note

Regularization is another reason why feature scaling such as standardization is important. For regularization to work properly, we need to ensure that all our features are on comparable scales.

The cost function for logistic regression can be regularized by adding a simple regularization term, which will shrink the weights during model training:

$$J(\boldsymbol{w}) = \sum_{i=1}^{n}\Big[-y^{(i)}\log\Big(\phi\big(z^{(i)}\big)\Big) - \big(1 - y^{(i)}\big)\log\Big(1 - \phi\big(z^{(i)}\big)\Big)\Big] + \frac{\lambda}{2}\lVert\boldsymbol{w}\rVert^2$$

Via the regularization parameter $\lambda$, we can then control how well we fit the training data while keeping the weights small. By increasing the value of $\lambda$, we increase the regularization strength.
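
To make the effect concrete, here is a sketch (my own illustration, with made-up numbers) of a single regularized gradient descent step; compared to the weight update in LogisticRegressionGD's fit method, the only change is the additional shrinkage term for the non-bias weights:

>>> eta, lambda_ = 0.05, 0.1                   # learning rate and regularization strength
>>> X = np.array([[1.0, 2.0], [0.5, -1.0]])    # two hypothetical training samples
>>> y = np.array([1, 0])                       # their class labels
>>> w = np.array([0.0, 0.1, -0.1])             # w[0] is the bias weight
>>> output = sigmoid(np.dot(X, w[1:]) + w[0])  # sigmoid activations
>>> errors = y - output
>>> w[1:] += eta * (X.T.dot(errors) - lambda_ * w[1:])   # shrink the non-bias weights
>>> w[0] += eta * errors.sum()                           # the bias is usually not regularized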

The parameter C that is implemented for the LogisticRegression class in scikit-learn comes from a convention in support vector machines, which will be the topic of the next section. The term C is the inverse of the regularization parameter $\lambda$, that is, $C = \frac{1}{\lambda}$. Consequently, decreasing the value of the inverse regularization parameter C means that we are increasing the regularization strength, which we can visualize by plotting the L2-regularization path for the two weight coefficients:

>>> weights, params = [], []
>>> for c in np.arange(-5, 5):
...     lr = LogisticRegression(C=10.**c, random_state=1)
...     lr.fit(X_train_std, y_train)
...     weights.append(lr.coef_[1])
...     params.append(10.**c)
>>> weights = np.array(weights)
>>> plt.plot(params, weights[:, 0],
...          label='petal length')
>>> plt.plot(params, weights[:, 1], linestyle='--',
...          label='petal width')
>>> plt.ylabel('weight coefficient')
>>> plt.xlabel('C')
>>> plt.legend(loc='upper left')
>>> plt.xscale('log')
>>> plt.show()

By executing the preceding code, we fitted ten logistic regression models with different values for the inverse-regularization parameter C. For the purposes of illustration, we only collected the weight coefficients of the classifier for class 1 versus the rest (here, the second class in the dataset, Iris-versicolor); remember that we are using the OvR technique for multiclass classification.
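
If you want to double-check which row of the weight matrix you are plotting, you can inspect the fitted estimator's standard attributes (coef_ and classes_ are part of scikit-learn's public API):

>>> lr.coef_.shape   # (3, 2): one row of weights per class, one column per feature
>>> lr.classes_      # array([0, 1, 2]); lr.coef_[1] therefore belongs to class 1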

As we can see in the resulting plot, the weight coefficients shrink if we decrease parameter C, that is, if we increase the regularization strength:

[Figure: the weight coefficients for petal length and petal width shrink towards zero as C decreases, that is, as the regularization strength increases]

Note

Since an in-depth coverage of the individual classification algorithms exceeds the scope of this book, I strongly recommend Dr. Scott Menard's Logistic Regression: From Introductory to Advanced Concepts and Applications, Sage Publications, 2009, to readers who want to learn more about logistic regression.
