Chapter 3. Unsupervised Learning

Nowadays, it is commonly said that huge amounts of data are available on the Internet for learning. As you saw in the previous chapters, supervised learning methods are very powerful at predicting future values from existing data, but they have an obvious drawback: the data must be curated, meaning a human being has to annotate the target class for a certain number of instances. This labor is typically done by an expert (if you want to assign the correct species to iris flowers, you need somebody who knows at least something about these flowers); it takes time and money to complete, and it typically does not produce large amounts of data (at least not compared with the Internet!). Yet every supervised learning model must be built on as much curated data as possible.

However, there are some things we can do without annotated data. Consider the case where you want to assign table seats at a wedding. You want to group people, putting similar people at the same table (the bride's family, the groom's friends, and so on). Anyone who has organized a wedding knows that this task, called clustering in machine learning terminology, is not an easy one. Sometimes people belong to more than one group, and you have to decide whether not-so-similar people can sit together (for example, the bride's and the groom's parents). Clustering involves finding groups whose elements are all similar to each other, while objects in different groups are not. What it means to be similar is a question every clustering method must answer. The other critical question is how to separate clusters. Humans are very good at finding clusters when faced with two-dimensional data (consider identifying cities on a map just from the layout of their streets), but things become more difficult as the number of dimensions grows.

In this chapter we will present several approaches to clustering: k-means (probably the most popular clustering method), affinity propagation, mean shift, and a model-based method called Gaussian Mixture Models.

Another example of unsupervised learning is Dimensionality Reduction. Suppose we represent learning instances with a large number of attributes and want to visualize them to identify their main patterns. This is very difficult when the number of features exceeds three, simply because we cannot visualize more than three dimensions. Dimensionality Reduction methods provide a way to represent the data points of a high-dimensional dataset in a lower-dimensional space while keeping (at least partly) their pattern structure. These methods are also helpful in choosing which models we should use for learning; for example, they can suggest whether it is reasonable to approximate some supervised learning task with a linear hyperplane or whether we should resort to more complex models.

Principal Component Analysis

Principal Component Analysis (PCA) is an orthogonal linear transformation that turns a set of possibly correlated variables into a new set of variables that are as uncorrelated as possible. The new variables lie in a new coordinate system such that the greatest variance is obtained by projecting the data onto the first coordinate, the second greatest variance by projecting onto the second coordinate, and so on. These new coordinates are called principal components; we have as many principal components as original dimensions, but we usually keep only those that explain the most variance. Each new principal component added to the set must be orthogonal (that is, uncorrelated) to the previously computed ones. PCA can be seen as a method that reveals the internal structure of data; it supplies the user with a lower-dimensional shadow of the original objects. If we keep only the first principal components, data dimensionality is reduced and it becomes easier to visualize the structure of the data. If we keep, for example, only the first and second components, we can examine the data using a two-dimensional scatter plot. As a result, PCA is useful for exploratory data analysis before building predictive models.
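To make this definition concrete, here is a minimal sketch (our own illustration, using only NumPy and a tiny synthetic two-variable dataset) that computes principal components by hand through the eigendecomposition of the covariance matrix. scikit-learn's PCA class, which we use later in this section, performs an equivalent computation (via singular value decomposition) for us.

>>> import numpy as np
>>> rng = np.random.RandomState(42)
>>> x = rng.normal(size=200)
>>> # two correlated variables: the second is a noisy copy of the first
>>> X = np.column_stack([x, x + 0.3 * rng.normal(size=200)])
>>> Xc = X - X.mean(axis=0)                  # center the data
>>> eigvals, eigvecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
>>> order = np.argsort(eigvals)[::-1]        # sort by decreasing variance
>>> components = eigvecs[:, order].T         # one principal component per row
>>> X_new = np.dot(Xc, components.T)         # project onto the new coordinates
>>> # the new coordinates are (up to rounding) uncorrelated
>>> print(np.corrcoef(X_new[:, 0], X_new[:, 1])[0, 1])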

For our learning methods, PCA will allow us to reduce a high-dimensional space into a low-dimensional one while preserving as much variance as possible. It is an unsupervised method since it does not need a target class to perform its transformations; it only relies on the values of the learning attributes. This is very useful for two major purposes:

  • Visualization: Projecting a high-dimensional space, for example, into two dimensions will allow us to map our instances into a two-dimensional graph. Using these graphical visualizations, we can have insights about the distribution of instances and look at how separable instances from different classes are. In this section we will use PCA to transform and visualize a dataset.
  • Feature selection: Since PCA can transform instances from high to lower dimensions, we could use this method to address the curse of dimensionality. Instead of learning from the original set of features, we can transform our instances with PCA and then apply a learning algorithm on top of the new feature space.

As a working example, in this section we will use a dataset of handwritten digits digitized as matrices of 8x8 pixels, so each instance will initially consist of 64 attributes. How can we visualize the distribution of instances? Visualizing 64 dimensions at the same time is impossible for a human being, so we will use PCA to reduce the instances to two dimensions and visualize their distribution in a two-dimensional scatter plot.

We start by loading our dataset (the digits dataset is one of the sample datasets provided with scikit-learn).

>>> from sklearn.datasets import load_digits
>>> digits = load_digits()
>>> X_digits, y_digits = digits.data, digits.target

If we print the digits keys, we get:

>>> print(digits.keys())
['images', 'data', 'target_names', 'DESCR', 'target']

We will use the data matrix, which holds the instances as rows of 64 attributes each, and the target vector, which holds the corresponding digit for each instance.
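If you want to double-check this, the shapes of these arrays (for the digits dataset bundled with scikit-learn) should look as follows:

>>> print(digits.data.shape)
(1797, 64)
>>> print(digits.images.shape)
(1797, 8, 8)
>>> print(digits.target.shape)
(1797,)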

Let us print the digits to take a look at how the instances will appear:

>>> import matplotlib.pyplot as plt
>>> n_row, n_col = 2, 5
>>>
>>> def print_digits(images, y, max_n=10):
>>>     # set up the figure size in inches
>>>     fig = plt.figure(figsize=(2. * n_col, 2.26 * n_row))
>>>     i = 0
>>>     while i < max_n and i < images.shape[0]:
>>>         p = fig.add_subplot(n_row, n_col, i + 1, xticks=[], yticks=[])
>>>         p.imshow(images[i], cmap=plt.cm.bone, interpolation='nearest')
>>>         # label the image with the target value
>>>         p.text(0, -1, str(y[i]))
>>>         i = i + 1
>>>
>>> print_digits(digits.images, digits.target, max_n=10)

These instances can be seen in the following diagram:

[Figure: the first ten handwritten digit images, labeled with their target values]

Now, let's define a function that will plot a scatter plot of the two-dimensional points obtained by the PCA transformation. Our data points will also be colored according to their classes. Recall that the target class is not used to perform the transformation; we only want to investigate whether the distribution after PCA reveals the distribution of the different classes, and whether they are clearly separable. We will use a different color for each of the ten digits, from 0 to 9.

>>> def plot_pca_scatter():
>>>     colors = ['black', 'blue', 'purple', 'yellow', 'white',
>>>               'red', 'lime', 'cyan', 'orange', 'gray']
>>>     for i in range(len(colors)):
>>>         px = X_pca[:, 0][y_digits == i]
>>>         py = X_pca[:, 1][y_digits == i]
>>>         plt.scatter(px, py, c=colors[i])
>>>     plt.legend(digits.target_names)
>>>     plt.xlabel('First Principal Component')
>>>     plt.ylabel('Second Principal Component')

At this point, we are ready to perform the PCA transformation. In scikit-learn, PCA is implemented as a transformer object that learns n components through its fit method and can then be applied to new data to project it onto those components. scikit-learn provides various classes that implement different kinds of PCA decompositions, such as PCA, ProbabilisticPCA, RandomizedPCA, and KernelPCA. If you need a detailed description of each, please refer to the scikit-learn documentation. In our case, we will work with the PCA class from the sklearn.decomposition module. Its most important parameter is n_components, which specifies the number of features the transformed instances will have. In our case, we want to transform instances of 64 features into instances of just ten features, so we will set n_components to 10.

Now we perform the transformation and plot the results:

>>> from sklearn.decomposition import PCA
>>> estimator = PCA(n_components=10)
>>> X_pca = estimator.fit_transform(X_digits)
>>> plot_pca_scatter()
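Before interpreting the plot, it is worth checking how much of the original variance the components actually retain; the fitted PCA estimator exposes this through its explained_variance_ratio_ attribute. A quick check along these lines might look like this:

>>> # fraction of the total variance captured by each of the ten components
>>> print(estimator.explained_variance_ratio_)
>>> # cumulative fraction captured by the two components used in the scatter plot
>>> print(estimator.explained_variance_ratio_[:2].sum())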

The plotted results can be seen in the following diagram:

[Figure: scatter plot of the two-dimensional PCA projection of the digits, colored by class]

From the preceding figure, we can draw a few interesting conclusions:

  • At first sight we can identify ten different groups of points, corresponding to the ten digits. We see that for most classes, the instances are clearly grouped into clusters according to their target class, and that these clusters are relatively distinct. The exception is the class corresponding to the digit 5, whose instances are sparsely distributed over the plane and overlap with the other classes.
  • At the other extreme, the class corresponding to the digit 0 is the most separated cluster. Intuitively, this class may be the one that is easiest to separate from the rest; that is, if we train a classifier, it should be the class with the best evaluation figures.
  • Also, looking at the topology of the distribution, we may predict that contiguous clusters correspond to similar digits, and that these will be the most difficult classes to separate. For example, the clusters corresponding to the digits 9 and 3 appear contiguous (which is to be expected, as their graphical representations are similar), so it might be more difficult to separate a 9 from a 3 than a 9 from a 4, whose cluster lies on the left-hand side, far away from these clusters.

Notice that we quickly obtained a graph that gives us a lot of insight into the problem. This technique can be used before training a supervised classifier in order to better understand the difficulties we may encounter. With this knowledge, we can plan better feature preprocessing and feature selection, select a more suitable learning model, and so on. As we mentioned before, PCA can also be used to perform dimensionality reduction to avoid the curse of dimensionality, and it may allow us to use simpler learning methods, such as linear models.
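As a rough sketch of this last idea (the particular classifier, the number of components, and the use of sklearn.model_selection are illustrative choices on our part, assuming a reasonably recent scikit-learn version), we could chain PCA with a simple linear model in a pipeline and estimate its accuracy by cross-validation:

>>> from sklearn.decomposition import PCA
>>> from sklearn.linear_model import LogisticRegression
>>> from sklearn.pipeline import Pipeline
>>> from sklearn.model_selection import cross_val_score
>>>
>>> # reduce the 64 original features to 10 principal components, then classify
>>> pca_clf = Pipeline([('pca', PCA(n_components=10)),
>>>                     ('clf', LogisticRegression(max_iter=1000))])
>>> scores = cross_val_score(pca_clf, X_digits, y_digits, cv=5)
>>> print(scores.mean())

Comparing this score with one obtained on the full 64-dimensional data would tell us how much predictive information the ten components preserve.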

To finish, let us look at the principal component transformations themselves. We can obtain the principal components from the estimator through its components_ attribute; this is a matrix with one row per component, and each row is the vector of weights used to project an instance from the original space onto that component. In the scatter plot we previously drew, we only took into account the first two components.
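We can quickly verify the shape of this matrix and the fact that transforming an instance amounts to centering it and multiplying it by the transposed components (estimator and X_digits are the objects fitted and loaded earlier); this should print the following:

>>> import numpy as np
>>> print(estimator.components_.shape)
(10, 64)
>>> X_centered = X_digits - estimator.mean_
>>> print(np.allclose(estimator.transform(X_digits), np.dot(X_centered, estimator.components_.T)))
True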

We will plot all the components in the same shape as the original data (digits).

>>> def print_pca_components(images, n_col, n_row):
>>>     plt.figure(figsize=(2. * n_col, 2.26 * n_row))
>>>     for i, comp in enumerate(images):
>>>         plt.subplot(n_row, n_col, i + 1)
>>>         plt.imshow(comp.reshape((8, 8)), interpolation='nearest')
>>>         plt.text(0, -1, str(i + 1) + '-component')
>>>         plt.xticks(())
>>>         plt.yticks(())
>>>
>>> n_components = n_row * n_col
>>> print_pca_components(estimator.components_[:n_components], n_col, n_row)

The components can be seen as follows:

[Figure: the ten principal components, each displayed as an 8x8 image]

Taking a look at the first two components in the preceding figure, we can make a few interesting observations:

  • If you look at the second component, you can see that it mostly highlights the central region of the image. The digit class that is most affected by this pattern is 0, since its central region is empty. This intuition is confirmed by our previous scatter plot: the cluster corresponding to the digit 0 is the one with the lowest values for the second component (a quick numerical check of this follows the list).
  • Regarding the first component, as we saw in the scatter plot, it is very useful for separating the cluster corresponding to the digit 4 (extreme left, low values) from the cluster corresponding to the digit 3 (extreme right, high values). The first component plot agrees with this observation: the zones it highlights roughly trace the shape of the digit 3, while the zones characteristic of the digit 4 show the opposite pattern.

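The first of these observations can be checked numerically from X_pca. Keep in mind that the sign of each principal component is arbitrary, so with another scikit-learn version the digit 0 could just as well sit at the opposite extreme of the second component:

>>> import numpy as np
>>> # mean value of the first two principal components for each digit class
>>> means = np.array([X_pca[y_digits == d, :2].mean(axis=0) for d in range(10)])
>>> # column 1 holds the per-class means of the second component; the digit 0
>>> # should sit at one extreme of that column
>>> print(means.round(1))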
If we used additional components, we would obtain more features with which to separate the classes along new dimensions. For example, we could add the third principal component and plot our instances in a three-dimensional scatter plot.
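A possible sketch of such a three-dimensional plot follows (the names estimator_3d and X_pca_3d are just illustrative):

>>> from mpl_toolkits.mplot3d import Axes3D  # registers the '3d' projection
>>> estimator_3d = PCA(n_components=3)
>>> X_pca_3d = estimator_3d.fit_transform(X_digits)
>>> fig = plt.figure()
>>> ax = fig.add_subplot(111, projection='3d')
>>> ax.scatter(X_pca_3d[:, 0], X_pca_3d[:, 1], X_pca_3d[:, 2], c=y_digits)
>>> ax.set_xlabel('First Principal Component')
>>> ax.set_ylabel('Second Principal Component')
>>> ax.set_zlabel('Third Principal Component')
>>> plt.show()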

In the next section, we will show another unsupervised group of methods: clustering algorithms. Like dimensionality-reduction algorithms, clustering does not need to know a target class. However, clustering methods try to group instances, looking for those that are (in some way) similar. We will see, however, that clustering methods, like supervised methods, can use PCA to better visualize and analyze their results.
