Gower and PAM

As you conduct clustering analysis in real life, it can quickly become apparent that neither hierarchical clustering nor k-means is specifically designed to handle mixed datasets. By mixed data, I mean both quantitative and qualitative variables or, more specifically, nominal, ordinal, and interval/ratio data.

The reality is that most datasets you will work with contain mixed data. There are a number of ways to handle this, such as performing principal components analysis (PCA) first to create latent variables and then using them as clustering input, or using a different dissimilarity calculation. We will discuss PCA in the next chapter.

With the power and simplicity of R, you can use the Gower dissimilarity coefficient to turn mixed data into a proper dissimilarity matrix. In this method, you can even include factors as input variables. Additionally, instead of k-means, I recommend using the Partitioning Around Medoids (PAM) clustering algorithm.
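
As a minimal sketch, the following uses daisy() from the cluster package to compute Gower dissimilarities on a small, made-up data frame; mixed_df and its columns are purely illustrative, mixing numeric, nominal, and ordinal variables:

    # a minimal, illustrative mixed data frame (not from the book's data)
    library(cluster)

    set.seed(123)
    mixed_df <- data.frame(
      income    = rnorm(10, mean = 50000, sd = 10000),              # interval/ratio
      region    = factor(sample(c("north", "south", "west"),
                                10, replace = TRUE)),               # nominal
      satisfied = ordered(sample(c("low", "medium", "high"),
                                 10, replace = TRUE),
                          levels = c("low", "medium", "high"))      # ordinal
    )

    # daisy() with metric = "gower" returns a dissimilarity object that
    # downstream functions such as pam() or hclust() can consume
    gower_dist <- daisy(mixed_df, metric = "gower")
    summary(gower_dist)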

PAM is very similar to k-means but offers a couple of advantages. They are listed as follows:

  1. First, PAM accepts a dissimilarity matrix, which allows the inclusion of mixed data (see the sketch following this list)
  2. Second, it is more robust to outliers and skewed data because it minimizes a sum of dissimilarities, instead of a sum of squared Euclidean distances (Reynolds, 1992)
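
Continuing with the illustrative gower_dist object from the earlier sketch, pam() from the cluster package can cluster directly on that dissimilarity matrix; the choice of k = 3 here is arbitrary:

    # cluster on the precomputed Gower dissimilarities (diss = TRUE)
    pam_fit <- pam(gower_dist, k = 3, diss = TRUE)

    # medoids are actual observations, which aids interpretation
    mixed_df[pam_fit$id.med, ]

    # cluster assignments and average silhouette width (useful when comparing k)
    table(pam_fit$clustering)
    pam_fit$silinfo$avg.width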

This is not to say that you must use Gower and PAM together. If you choose, you can use the Gower coefficients with hierarchical clustering, and I've seen arguments for and against using them in the context of k-means. Additionally, PAM can accept dissimilarity measures other than Gower. However, when paired, they make an effective method to handle mixed data. Let's take a quick look at both of these concepts before moving on.
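
For completeness, here is a brief sketch of the alternative pairing mentioned above, passing the same illustrative gower_dist object to base R's hclust() with complete linkage:

    # hierarchical clustering on the Gower dissimilarities
    hc_fit   <- hclust(gower_dist, method = "complete")
    clusters <- cutree(hc_fit, k = 3)
    table(clusters)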
