Clustering ensembles

In Chapter 3, Predicting Sports Winners with Decision Trees, we looked at a classification ensemble using the random forest algorithm, which is an ensemble of many relatively weak tree-based classifiers. Ensembling can also be performed using clustering algorithms. One of the key reasons for doing this is to smooth the results from many runs of an algorithm. As we saw before, the results of running k-means vary depending on the selection of the initial centroids. This variation can be reduced by running the algorithm many times and then combining the results.

Ensembling also reduces the effect of parameter choice on the final result. Most clustering algorithms are quite sensitive to their parameter values, and choosing slightly different parameters can produce quite different clusters.

Evidence accumulation

As a basic ensemble, we can first cluster the data many times and record the labels from each run. We then record how many times each pair of samples was clustered together in a new matrix. This is the essence of the Evidence Accumulation Clustering (EAC) algorithm.

EAC has two major steps. The first step is to cluster the data many times using a lower-level clustering algorithm, such as k-means, and to record how often each pair of samples appeared in the same cluster across those iterations. This is stored in a coassociation matrix. The second step is to perform a cluster analysis on the resulting coassociation matrix, which is done using another type of clustering algorithm called hierarchical clustering. This has an interesting property, as it is mathematically the same as finding a tree that links all the nodes together and removing weak links.

We can create a coassociation matrix from an array of labels by iterating over each of the labels and recording where two samples have the same label. We use SciPy's csr_matrix, which is a type of sparse matrix:

import numpy as np
from scipy.sparse import csr_matrix

Our function definition takes a set of labels:

def create_coassociation_matrix(labels):

We then record the rows and columns of each match. We record these in two lists. Sparse matrices are commonly just sets of lists recording the positions of nonzero values, and csr_matrix is an example of this type of sparse matrix:

    rows = []
    cols = []

We then iterate over each of the unique labels:

    unique_labels = set(labels)
    for label in unique_labels:

We look for all samples that have this label:

        indices = np.where(labels == label)[0]

For each pair of samples with this label, we record the positions of both samples in our lists. The code is as follows:

        for index1 in indices:
            for index2 in indices:
                rows.append(index1)
                cols.append(index2)

Outside all the loops, we then create the data array, which is simply the value 1 for every time two samples were listed together. The number of 1s to place is the number of matches we recorded, which is the length of our rows list. The code is as follows:

    data = np.ones((len(rows),))
    return csr_matrix((data, (rows, cols)), dtype='float')

To get the coassociation matrix from the labels, we simply call this function:

C = create_coassociation_matrix(labels)

From here, we can add multiple instances of these matrices together. This allows us to combine the results from multiple runs of k-means. Printing out C (just enter C into a new cell and run it) will tell you how many cells have nonzero values in them. In my case, about half of the cells had values in them, as my clustering result had a large cluster (the more even the clusters, the lower the number of nonzero values).
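
As a quick sanity check, the following is a minimal sketch, using a small made-up label array rather than the chapter's dataset, that builds coassociation matrices for two hypothetical runs, inspects the number of nonzero cells, and adds the matrices together:

import numpy as np

# Hypothetical labels from two separate clustering runs on six samples
labels_run1 = np.array([0, 0, 1, 1, 2, 2])
labels_run2 = np.array([1, 1, 1, 0, 0, 0])

C1 = create_coassociation_matrix(labels_run1)
C2 = create_coassociation_matrix(labels_run2)
print(C1.nnz)  # 12 nonzero cells: three clusters of two samples, each contributing a 2x2 block

# Entries in the combined matrix count how often each pair was clustered together
C_combined = C1 + C2
print(C_combined.toarray())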

The next step involves the hierarchical clustering of the coassociation matrix. We will do this by finding minimum spanning trees on this matrix and removing edges with a weight lower than a given threshold.

In graph theory, a spanning tree is a set of edges on a graph that connects all of the nodes together. The Minimum Spanning Tree (MST) is simply the spanning tree with the lowest total weight. For our application, the nodes in our graph are samples from our dataset, and the edge weights are the number of times those two samples were clustered together—that is, the value from our coassociation matrix.

In the following figure, an MST on a graph of six nodes is shown. A node in the graph can be part of more than one edge in the MST; the only criterion for a spanning tree is that all nodes should be connected together.

[Figure: a minimum spanning tree on a graph of six nodes]

To compute the MST, we use SciPy's minimum_spanning_tree function, which is found in the sparse package:

from scipy.sparse.csgraph import minimum_spanning_tree

The minimum_spanning_tree function can be called directly on the sparse matrix returned by our coassociation function:

mst = minimum_spanning_tree(C)

However, in our coassociation matrix C, higher values are indicative of samples that are clustered together more often; that is, they are similarity values. In contrast, minimum_spanning_tree treats its input as distances, where higher values mean samples are further apart. For this reason, we compute the minimum spanning tree on the negation of the coassociation matrix instead:

mst = minimum_spanning_tree(-C)

The result from the preceding function is a matrix the same size as the coassociation matrix (the number of rows and columns is the same as the number of samples in our dataset), with only the edges in the MST kept and all others removed.
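
As a small illustration, using a hand-built similarity matrix rather than the chapter's data, the following sketch shows that only the MST edges survive as nonzero entries in the result:

import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import minimum_spanning_tree

# A toy similarity matrix for four samples: higher values mean the samples
# were clustered together more often
similarity = csr_matrix(np.array([[0, 3, 1, 0],
                                  [3, 0, 2, 0],
                                  [1, 2, 0, 1],
                                  [0, 0, 1, 0]], dtype=float))

toy_mst = minimum_spanning_tree(-similarity)
print(toy_mst.toarray())  # only the three (negated) MST edges are nonzero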

We then remove any edge with a weight less than a predefined threshold. To do this, we iterate over the edges in the MST matrix, removing any that are below a specific value. We can't test this out with just a single iteration's coassociation matrix (the values will be either 1 or 0, so there isn't much to work with). So, we will create a second set of labels first, create its coassociation matrix, and then combine the two matrices. The code is as follows:

pipeline.fit(documents)
labels2 = pipeline.predict(documents)
C2 = create_coassociation_matrix(labels2)
C_sum = (C + C2) / 2

We then compute the MST and remove any edge that didn't occur in both of these clusterings:

mst = minimum_spanning_tree(-C_sum)
mst.data[mst.data > -1] = 0

The edges we wanted to cut were those that did not occur in both clusterings, that is, any edge with a value below 1 (after averaging, an edge that occurred in both clusterings has a value of exactly 1). However, as we negated the coassociation matrix, we had to negate the threshold value too.

Lastly, we find all of the connected components, which is simply a way to find all of the samples that are still connected by edges after we removed the edges with low weights. The first returned value is the number of connected components (that is, the number of clusters) and the second is the labels for each sample. The code is as follows:

from scipy.sparse.csgraph import connected_components
number_of_clusters, labels = connected_components(mst)

In my dataset, I obtained eight clusters, with the clusters being approximately the same as before. This is hardly a surprise, given we only used two iterations of k-means; using more iterations of k-means (as we do in the next section) will result in more variance.

How it works

In the k-means algorithm, each feature is used without any regard to its scale; in essence, all features are assumed to be on the same scale. We saw the problems with not scaling features in Chapter 2, Classifying with scikit-learn Estimators. The result of this is that k-means looks for circular clusters, as shown in the following screenshot:

[Screenshot: three clusters of different shapes, one circular, one elliptical, and one non-convex]

As we can see in the preceding screenshot, not all clusters have this shape. The blue cluster is circular and is of the type that k-means is very good at picking up. The red cluster is an ellipse. The k-means algorithm can pick up clusters of this shape with some feature scaling. The third cluster isn't even convex—it is an odd shape that k-means will have trouble discovering.
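
To see this in practice, here is a minimal sketch using scikit-learn's make_moons toy dataset (not part of this chapter's data), showing that k-means struggles to recover non-convex clusters:

from sklearn.cluster import KMeans
from sklearn.datasets import make_moons
from sklearn.metrics import adjusted_rand_score

# Two interleaving crescents: non-convex clusters of the kind k-means has trouble with
X, y_true = make_moons(n_samples=300, noise=0.05, random_state=14)

km = KMeans(n_clusters=2)
y_pred = km.fit_predict(X)

# The agreement with the true crescents is well below 1.0, because k-means
# tends to split the data into two roughly circular halves instead
print(adjusted_rand_score(y_true, y_pred))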

The EAC algorithm works by remapping the features onto a new space, in essence turning each run of the k-means algorithm into a transformer, using the same principles we saw in the previous section, where k-means was used for feature reduction. In this case, though, we only use the actual label and not the distance to each centroid. This is the data that is recorded in the coassociation matrix.

The result is that EAC now only cares about how close things are to each other, not how they are placed in the original feature space. There are still issues around unscaled features. Feature scaling is important and should be done anyway (we did it using tf-idf in this chapter, which results in feature values having the same scale).

We saw a similar type of transformation in Chapter 9, Authorship Attribution, through the use of kernels in SVMs. These transformations are very powerful and should be kept in mind for complex datasets.

Implementation

Putting this all together, we can now create a simple clustering algorithm that fits the scikit-learn interface and performs all of the steps in EAC. First, we create the basic structure of the class using scikit-learn's BaseEstimator and ClusterMixin:

from sklearn.base import BaseEstimator, ClusterMixin
class EAC(BaseEstimator, ClusterMixin):

Our parameters are the number of k-means clusterings to perform in the first step (to create the coassociation matrix), the threshold to cut off at, and the number of clusters to find in each k-means clustering. We set a range of n_clusters in order to get lots of variance in our k-means iterations. Generally, in ensemble terms, variance is a good thing; without it, the solution can be no better than the individual clusterings (that said, high variance is not an indicator that the ensemble will be better). The code is as follows:

    def __init__(self, n_clusterings=10, cut_threshold=0.5, n_clusters_range=(3, 10)):
        self.n_clusterings = n_clusterings
        self.cut_threshold = cut_threshold
        self.n_clusters_range = n_clusters_range

Next up is the fit function for our EAC class:

    def fit(self, X, y=None):

We then perform our low-level clustering using k-means and sum the resulting coassociation matrices from each iteration. We do this in a generator to save memory, creating only the coassociation matrices when we need them. In each iteration of this generator, we create a new single k-means run with our dataset and then create the coassociation matrix for it. We use sum to add these together. The code is as follows:

        C = sum((create_coassociation_matrix(self._single_clustering(X))
                 for i in range(self.n_clusterings)))

As before, we first divide the summed coassociation matrix by the number of clusterings, so that its values are the fraction of runs in which two samples were clustered together. We then create the MST, remove any edges below the given threshold (properly negating values as explained earlier), and find the connected components. As with any fit function in scikit-learn, we need to return self in order for the class to work effectively in pipelines. The code is as follows:

        # Average the coassociation counts so that the values (and cut_threshold) are fractions of runs
        C = C / self.n_clusterings
        mst = minimum_spanning_tree(-C)
        mst.data[mst.data > -self.cut_threshold] = 0
        self.n_components, self.labels_ = connected_components(mst)
        return self

We then write the function to cluster a single iteration. To do this, we randomly choose a number of clusters to find using NumPy's randint function and our n_clusters_range parameter, which sets the range of possible values. We then cluster and predict the dataset using k-means. The return value here will be the labels coming from k-means. The code is as follows:

    def _single_clustering(self, X):
        n_clusters = np.random.randint(*self.n_clusters_range)
        km = KMeans(n_clusters=n_clusters)
        return km.fit_predict(X)

We can now run this on our previous code by setting up a pipeline as before and using EAC where we previously used a KMeans instance as our final stage of the pipeline. The code is as follows:

pipeline = Pipeline([('feature_extraction', TfidfVectorizer(max_df=0.4)),
                     ('clusterer', EAC())
                     ])
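
As a quick check, assuming the documents variable from earlier in the chapter, we can fit the pipeline and read the results off the EAC step:

pipeline.fit(documents)
eac = pipeline.named_steps['clusterer']
print(eac.n_components)   # the number of clusters found
print(eac.labels_[:10])   # cluster labels for the first ten documents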