Clustering handwritten digits with k-means

K-means is probably the most popular clustering algorithm: it is simple, easy to implement, and has shown good performance on many different tasks. It belongs to the class of partition algorithms, which simultaneously partition the data points into distinct groups called clusters. An alternative family of methods, which we will not cover in this book, is hierarchical clustering: these algorithms find an initial set of clusters and then divide or merge them to form new ones.

The main idea behind k-means is to find a partition of the data points such that the sum of the squared distances between each point and the mean of its cluster is minimized. Note that this method assumes that you know a priori the number of clusters your data should be divided into.
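
More precisely, if mu_1, ..., mu_k are the cluster means and C_1, ..., C_k the clusters, k-means tries to minimize the within-cluster sum of squares: the sum, over every cluster C_j and every point x assigned to C_j, of the squared distance ||x - mu_j||^2 (this quantity is what scikit-learn calls inertia).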

We will show in this section how k-means works, using a motivating example: the problem of clustering handwritten digits. So, let us first import our dataset into our Python environment and show what the handwritten digits look like (we will use a slightly different version of the print_digits function we introduced in the previous section).

>>> import numpy as np
>>> import matplotlib.pyplot as plt
>>>
>>> from sklearn.datasets import load_digits
>>> from sklearn.preprocessing import scale
>>> digits = load_digits()
>>> data = scale(digits.data)
>>>
>>> def print_digits(images, y, max_n=10):
>>>     # set up the figure size in inches
>>>     fig = plt.figure(figsize=(12, 12))
>>>     fig.subplots_adjust(left=0, right=1, bottom=0, top=1,
          hspace=0.05, wspace=0.05)
>>>     i = 0
>>>     while i < max_n and i < images.shape[0]:
>>>         # plot the images in a matrix of 20x20
>>>         p = fig.add_subplot(20, 20, i + 1, xticks=[],
              yticks=[])
>>>         p.imshow(images[i], cmap=plt.cm.bone)
>>>         # label the image with the target value
>>>         p.text(0, 14, str(y[i]))
>>>         i = i + 1
>>>
>>> print_digits(digits.images, digits.target, max_n=10)

The printed digits can be seen in the following figure:

You can see that the dataset also contains each digit's value as a target class, but since we are clustering, we will not use this information until evaluation time. We will just see whether we can group the figures based on their similarity and form the ten clusters we expect.

As usual, we must separate the training and testing sets as follows:

>>> from sklearn.cross_validation import train_test_split
>>> X_train, X_test, y_train, y_test, images_train, images_test = \
        train_test_split(data, digits.target, digits.images,
        test_size=0.25, random_state=42)
>>>
>>> n_samples, n_features = X_train.shape
>>> n_digits = len(np.unique(y_train))
>>> labels = y_train

Once we have our training set, we are ready to cluster instances. What the k-means algorithm does is:

  1. Select an initial set of cluster centers at random.
  2. Find the nearest cluster center for each data point, and assign each data point to that cluster.
  3. Compute the new cluster centers by averaging the values of the data points in each cluster, and repeat until cluster membership stabilizes; that is, until few (or no) data points change clusters between iterations (a minimal sketch of these steps is shown right after this list).
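
The following is a minimal sketch of these three steps using only NumPy (the simple_kmeans function is just for illustration; in the rest of the section we will use scikit-learn's implementation, which is far more efficient and robust):

>>> def simple_kmeans(X, k, n_iter=10, seed=42):
>>>     rng = np.random.RandomState(seed)
>>>     # step 1: pick k data points at random as the initial centers
>>>     centers = X[rng.permutation(X.shape[0])[:k]]
>>>     for iteration in range(n_iter):
>>>         # step 2: assign each point to its nearest center
>>>         distances = ((X[:, np.newaxis, :] - centers) ** 2).sum(axis=2)
>>>         assignment = distances.argmin(axis=1)
>>>         # step 3: recompute each center as the mean of its points
>>>         # (in practice, we would iterate until assignments stop changing)
>>>         for j in range(k):
>>>             if np.any(assignment == j):
>>>                 centers[j] = X[assignment == j].mean(axis=0)
>>>     return assignment, centers

For instance, simple_kmeans(X_train, 10) would return a cluster assignment for every training instance, analogous to the labels_ attribute of scikit-learn's estimator that we will use shortly.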

Because of how k-means works, it can converge to local minima, and the initial set of cluster centers can greatly affect the clusters that are found. The usual approach to mitigate this is to try several initial configurations and keep the one with the minimal value of the within-cluster sum of squared distances (the inertia). The implementation of k-means in scikit-learn already does this (the n_init parameter lets us set how many different centroid configurations the algorithm will try). It also lets us use the k-means++ initialization method, which selects initial centroids that are distant from each other, usually leading to better results. Let's see how this works on our dataset.

>>> from sklearn import cluster
>>> clf = cluster.KMeans(init='k-means++',
    n_clusters=10, random_state=42)
>>> clf.fit(X_train)
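
As a side note, we can check the effect of the n_init parameter mentioned before by comparing the final inertia, which the estimator stores in the inertia_ attribute after fitting. The following comparison is just a sketch (the exact values depend on the data and the random seed), but the run with more initializations should reach an inertia at least as low as the single-initialization run:

>>> clf_single = cluster.KMeans(init='random', n_clusters=10, n_init=1,
    random_state=42).fit(X_train)
>>> clf_multi = cluster.KMeans(init='random', n_clusters=10, n_init=10,
    random_state=42).fit(X_train)
>>> print clf_single.inertia_, clf_multi.inertia_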

The procedure is similar to the one used for supervised learning, but note that the fit method only takes the training data as an argument (there are no target classes). Also observe that we need to specify the number of clusters in advance. We know this number because we expect the clusters to correspond to the ten possible digits.

If we print the value of the labels_ attribute of the estimator, we get a list of the cluster numbers associated with each training instance.

>>> print_digits(images_train, clf.labels_, max_n=10)

The clusters can be seen in the following figure:

Note that the cluster number has nothing to do with the real digit value. Remember that we have not used the class labels at all; we have only grouped the images by similarity. Let's see how our algorithm behaves on the testing data.

To predict the clusters for the testing data, we use the usual predict method of the estimator.

>>> y_pred = clf.predict(X_test)

Let us see what the clusters look like:

>>> def print_cluster(images, y_pred, cluster_number):
>>>     images = images[y_pred == cluster_number]
>>>     y_pred = y_pred[y_pred == cluster_number]
>>>     print_digits(images, y_pred, max_n=10)
>>> for i in range(10):
>>>     print_cluster(images_test, y_pred, i)

This code shows ten images from each cluster. Some clusters are very clear, as shown in the following figure:

Cluster number 2 corresponds to zeros. What about cluster number 7?

This one is not so clear; it seems that cluster 7 groups drawings that look similar to the digit nine. Cluster number 9 only has six instances, as shown in the following figure:

It should be clear by now that we are not classifying images here (as we did in the face examples in the previous chapter); we are only grouping them into ten clusters (you can try changing the number of clusters and see what happens).

How can we evaluate our performance? Measures such as precision and recall do not work here, since we have no target classes to compare with. To evaluate, we need to know the "real" clusters, whatever that means. For our example, we can suppose that each cluster should include every drawing of a certain digit, and only drawings of that digit. Knowing this, we can compute the adjusted Rand index between our cluster assignment and the expected one. The Rand index is a measure similar to accuracy, but it takes into account the fact that classes can have different names in the two assignments; that is, if we rename the classes, the index does not change. The adjusted version tries to discount agreements that could have occurred by chance. When the two assignments define exactly the same clusters, the adjusted Rand index equals one, while values close to zero mean that the agreement is no better than what we would expect by chance.
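
For instance, the following toy example shows the invariance to renaming: the two assignments define exactly the same grouping, only with the cluster names swapped, so the score is still perfect:

>>> from sklearn import metrics
>>> print metrics.adjusted_rand_score([0, 0, 1, 1], [1, 1, 0, 0])
1.0

Back to the digits, let us compute the adjusted Rand index between our cluster assignment and the true digit classes: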

>>> from sklearn import metrics
>>> print "Adjusted rand score: {:.2}".format(
        metrics.adjusted_rand_score(y_test, y_pred))
Adjusted rand score: 0.57
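
As suggested before, we can use this same score to see what happens when we change the number of clusters. The following sketch (the exact values will vary) fits k-means with a few different values of n_clusters and reports the agreement with the true digit classes:

>>> for k in [8, 10, 12]:
>>>     clf_k = cluster.KMeans(init='k-means++', n_clusters=k,
        random_state=42).fit(X_train)
>>>     pred_k = clf_k.predict(X_test)
>>>     print k, metrics.adjusted_rand_score(y_test, pred_k)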

We can also print the confusion matrix as follows:

>>> print metrics.confusion_matrix(y_test, y_pred)
[[ 0  0 43  0  0  0  0  0  0  0]
 [20  0  0  7  0  0  0 10  0  0]
 [ 5  0  0 31  0  0  0  1  1  0]
 [ 1  0  0  1  0  1  4  0 39  0]
 [ 1 50  0  0  0  0  1  2  0  1]
 [ 1  0  0  0  1 41  0  0 16  0]
 [ 0  0  1  0 44  0  0  0  0  0]
 [ 0  0  0  0  0  1 34  1  0  5]
 [21  0  0  0  0  3  1  2 11  0]
 [ 0  0  0  0  0  2  3  3 40  0]]

Observe that class 0 in the test set (that is, the drawings of the digit zero) is completely assigned to cluster number 2. We have problems with the digit eight: 21 of its instances fall in cluster 0, while only 11 fall in cluster 8, and so on. Not so good after all.
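
A quick way to read this matrix (using a small helper we add here, not part of the original example) is to find, for each true digit, the cluster that receives most of its test instances:

>>> cm = metrics.confusion_matrix(y_test, y_pred)
>>> # for each row (true digit), the cluster receiving most of its instances
>>> print np.argmax(cm, axis=1)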

If we want to show graphically what the k-means clusters look like, we must plot them on a two-dimensional plane. We learned how to do that in the previous section, using Principal Component Analysis (PCA). Let's construct a meshgrid of points (after dimensionality reduction), calculate their assigned clusters, and plot them.

Note

This example is taken from the very nice scikit-learn tutorial at http://scikit-learn.org/.

>>> from sklearn import decomposition
>>> pca = decomposition.PCA(n_components=2).fit(X_train)
>>> reduced_X_train = pca.transform(X_train)
>>> # Step size of the mesh. 
>>> h = .01     
>>> # point in the mesh [x_min, x_max] x [y_min, y_max].
>>> x_min, x_max = (reduced_X_train[:, 0].min() - 1,
        reduced_X_train[:, 0].max() + 1)
>>> y_min, y_max = (reduced_X_train[:, 1].min() - 1,
        reduced_X_train[:, 1].max() + 1)
>>> xx, yy = np.meshgrid(np.arange(x_min, x_max, h), 
    np.arange(y_min, y_max, h))
>>> kmeans = cluster.KMeans(init='k-means++', n_clusters=n_digits, 
    n_init=10)
>>> kmeans.fit(reduced_X_train)
>>> Z = kmeans.predict(np.c_[xx.ravel(), yy.ravel()])
>>> # Put the result into a color plot
>>> Z = Z.reshape(xx.shape)
>>> plt.figure(1)
>>> plt.clf()
>>> plt.imshow(Z, interpolation='nearest', 
    extent=(xx.min(), xx.max(), yy.min(), 
    yy.max()), cmap=plt.cm.Paired, aspect='auto', origin='lower')
>>> plt.plot(reduced_X_train[:, 0], reduced_X_train[:, 1], 'k.', 
    markersize=2)
>>> # Plot the centroids as white dots
>>> centroids = kmeans.cluster_centers_
>>> plt.scatter(centroids[:, 0], centroids[:, 1], marker='.', 
    s=169, linewidths=3, color='w', zorder=10)
>>> plt.title('K-means clustering on the digits dataset '
    '(PCA reduced data)\nCentroids are marked with white dots')
>>> plt.xlim(x_min, x_max)
>>> plt.ylim(y_min, y_max)
>>> plt.xticks(())
>>> plt.yticks(())
>>> plt.show()

The k-means clustering on the digits dataset can be seen in the following diagram:
