Summary

In this chapter, we discussed the perceptron. Inspired by neurons, the perceptron is a linear model for binary classification. The perceptron classifies instances by processing a linear combination of the explanatory variables and the model's weights with an activation function. While a perceptron with a logistic sigmoid activation function is the same model as logistic regression, the perceptron learns its weights using an online, error-driven algorithm. The perceptron can be used effectively on some problems, but, like the other linear classifiers we have discussed, it is not a universal function approximator; it can only separate the instances of one class from the instances of the other with a hyperplane. Some datasets are not linearly separable; that is, no possible hyperplane can classify all of the instances correctly. In the following chapters, we will discuss two models that can be used with linearly inseparable data: the artificial neural network, which creates a universal function approximator from a graph of perceptrons, and the support vector machine, which projects the data onto a higher-dimensional space in which it is linearly separable.
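The decision rule and the online, error-driven update described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the book's implementation: it assumes labels in {0, 1}, a step (Heaviside) activation, a fixed learning rate, and a bias folded into the weight vector as an always-one feature. The weights are updated only when a prediction is wrong, one instance at a time.

```python
import numpy as np

def train_perceptron(X, y, lr=0.1, epochs=10):
    """Online, error-driven perceptron training.

    X: (n_samples, n_features) array; y: labels in {0, 1}.
    Returns the learned weights, with the bias stored in w[0]."""
    X = np.hstack([np.ones((X.shape[0], 1)), X])  # prepend a constant bias feature
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for xi, target in zip(X, y):
            prediction = 1 if xi @ w > 0 else 0   # step activation on the linear combination
            w += lr * (target - prediction) * xi  # update the weights only on an error
    return w

def predict(w, X):
    X = np.hstack([np.ones((X.shape[0], 1)), X])
    return (X @ w > 0).astype(int)

# Logical AND is linearly separable, so the perceptron converges on it.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])
w = train_perceptron(X, y)
print(predict(w, X))  # → [0 0 0 1]
```

Replacing `y` with the labels for logical XOR, which is not linearly separable, illustrates the limitation discussed above: no number of epochs will make this loop classify all four instances correctly.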
