Limitations of the perceptron

While the perceptron classified the instances in our example well, the model has limitations. Linear models like the perceptron with a Heaviside activation function are not universal function approximators; there are some functions that they cannot represent. Specifically, linear models can only learn to approximate functions for linearly separable datasets. The linear classifiers that we have examined find a hyperplane that separates the positive class from the negative class; if no such hyperplane exists, the problem is not linearly separable.
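To make the decision rule concrete, the following snippet (a minimal sketch of my own, not code from earlier in the chapter) implements a perceptron's prediction step with a Heaviside activation; the weights and bias are arbitrary values chosen for illustration:

```python
import numpy as np

def heaviside_perceptron(X, w, b):
    """Predict 1 where the weighted sum exceeds zero, and 0 otherwise."""
    return (np.dot(X, w) + b > 0).astype(int)

# Hypothetical weights chosen by hand for illustration; the decision
# boundary w . x + b = 0 is a straight line (a hyperplane in two
# dimensions), so only linearly separable classes can be represented.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
print(heaviside_perceptron(X, w=np.array([1.0, 1.0]), b=-0.5))
# Output: [0 1 1 1] -- this particular boundary computes logical OR,
# which is linearly separable.
```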

A simple example of a function that is linearly inseparable is the logical operation XOR, or exclusive disjunction. The output of XOR is one when exactly one of its inputs is equal to one and the other is equal to zero. The inputs and outputs of XOR are plotted in two dimensions in the following figure. When XOR outputs 1, the instance is marked with a circle; when XOR outputs 0, the instance is marked with a diamond:

[Figure: the four XOR instances plotted in two dimensions; circles mark instances where XOR outputs 1 and diamonds mark instances where XOR outputs 0]
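If you would like to reproduce the plot yourself, a few lines of matplotlib suffice; this snippet is a sketch of my own rather than the source of the figure above:

```python
import matplotlib.pyplot as plt

# The four XOR instances: circles where XOR outputs 1, diamonds where it outputs 0.
plt.scatter([0, 1], [1, 0], marker='o', s=100, label='XOR outputs 1')
plt.scatter([0, 1], [0, 1], marker='D', s=100, label='XOR outputs 0')
plt.xlabel('Input 1')
plt.ylabel('Input 2')
plt.title('XOR is not linearly separable')
plt.legend()
plt.show()
```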

It is impossible to separate the circles from the diamonds using a single straight line. Suppose that the instances are pegs on a board. If you were to stretch a rubber band around both of the positive instances, and stretch a second rubber band around both of the negative instances, the bands would intersect in the middle of the board. The rubber bands represent convex hulls, or the envelope that contains all of the points within the set and all of the points along any line connecting a pair of points within the set. Feature representations are more likely to be linearly separable in higher-dimensional spaces than in lower-dimensional spaces. For instance, text classification problems tend to be linearly separable when high-dimensional representations like the bag-of-words are used.
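We can verify this empirically with scikit-learn's Perceptron class. Because no separating hyperplane exists, the trained model must misclassify at least one of the four instances, so its training accuracy cannot exceed 0.75 (the exact value depends on the weights that training settles on):

```python
import numpy as np
from sklearn.linear_model import Perceptron

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 0])  # XOR outputs

clf = Perceptron(max_iter=1000, random_state=0)
clf.fit(X, y)
# No hyperplane separates the classes, so at least one of the four
# instances must be misclassified; accuracy is at most 0.75.
print('Training accuracy:', clf.score(X, y))
```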

In the next two chapters, we will discuss techniques that can be used to model linearly inseparable data. The first technique, called kernelization, projects linearly inseparable data into a higher-dimensional space in which it is linearly separable. Kernelization can be used in many models, including perceptrons, but it is particularly associated with support vector machines, which we will discuss in the next chapter. Support vector machines can also find the hyperplane that separates linearly inseparable classes with the fewest possible errors. The second technique creates a directed graph of perceptrons; the resulting model, called an artificial neural network, is a universal function approximator. We will discuss artificial neural networks in Chapter 10, From the Perceptron to Artificial Neural Networks.
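As a preview of the first technique, the following sketch maps the XOR instances into a higher-dimensional space by hand, appending the product of the two inputs as a third feature (a kernel performs a comparable mapping implicitly); in this mapped space the classes are linearly separable, and the perceptron should be able to classify all four instances:

```python
import numpy as np
from sklearn.linear_model import Perceptron

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 0])  # XOR outputs

# Explicitly append the product of the two inputs as a third feature.
# A kernel would perform a comparable mapping implicitly.
X_mapped = np.hstack([X, (X[:, 0] * X[:, 1]).reshape(-1, 1)])

clf = Perceptron(max_iter=1000, tol=None, random_state=0)
clf.fit(X_mapped, y)
# In the mapped space the classes are linearly separable, so the
# perceptron should now classify all four instances correctly.
print('Training accuracy:', clf.score(X_mapped, y))
```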
