Performance considerations

The three unsupervised learning techniques share the same limitation: a high computational complexity.

K-means

K-means has a computational complexity of O(iKnm), where i is the number of iterations, K the number of clusters, n the number of observations, and m the number of features. The performance of the algorithm can be improved with the following techniques:

  • Reducing the average number of iterations by seeding the centroids with an algorithm such as initialization by ranking the variance of the initial clusters, as described at the beginning of this chapter.
  • Using a parallel implementation of K-means and leveraging a large-scale framework such as Hadoop or Spark, as illustrated in the sketch after this list.
  • Reducing the number of outliers and possible features by filtering out the noise with a smoothing algorithm such as a discrete Fourier transform or a Kalman filter.
  • Decreasing the dimensions of the model by following a two-step process: a first pass with a smaller number of clusters K and/or a loose exit condition regarding the reassignment of data points. The data points close to each centroid are aggregated into a single observation. A second pass is then run on a smaller set of observations.
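
As an illustration of the parallel option above, the following is a minimal sketch of K-means training with the RDD-based Spark MLlib API; the input path, the number of clusters, and the number of iterations are hypothetical placeholders, not values prescribed by this chapter.

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.mllib.clustering.KMeans
import org.apache.spark.mllib.linalg.Vectors

object ParallelKMeans extends App {
  // Local master for testing only; remove setMaster when submitting to a cluster
  val sc = new SparkContext(new SparkConf().setAppName("ParallelKMeans").setMaster("local[*]"))

  // Each line of the input file is a space-separated vector of m features (path is a placeholder)
  val observations = sc.textFile("hdfs://.../observations.txt")
    .map(line => Vectors.dense(line.split(' ').map(_.toDouble)))
    .cache()

  // K = 8 clusters and 50 iterations are hypothetical values
  val model = KMeans.train(observations, 8, 50)

  // Total within-cluster sum of squared distances, used to compare configurations
  val wssse = model.computeCost(observations)
  sc.stop()
}

Distributing the observations across Spark partitions reduces the wall-clock time of each iteration but does not change the O(iKnm) asymptotic cost.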

EM

The computational complexity of the expectation-maximization algorithm for each iteration (E + M steps) is O(m²n), where m is the number of hidden or latent variables and n is the number of observations.

A partial list of suggested performance improvements includes:

  • Filtering of raw data to remove noise and outliers
  • Using a sparse matrix on a large feature set to reduce the complexity of the covariance matrix, if possible
  • Applying the Gaussian mixture model (GMM) wherever possible: the assumption of a Gaussian distribution simplifies the computation of the log-likelihood (see the sketch after this list)
  • Using a parallel data processing framework such as Apache Hadoop or Spark as explained in the Apache Spark section in Chapter 12, Scalable Frameworks
  • Using a kernel method to reduce the estimate of covariance in the E-step
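
As an illustration of the GMM option above, here is a minimal sketch that relies on the multivariate normal mixture EM fitter of Apache Commons Math; the small data array and the number of components are placeholder values chosen only for the example.

import org.apache.commons.math3.distribution.fitting.MultivariateNormalMixtureExpectationMaximization

// n = 6 observations of 2 features each (placeholder values)
val data: Array[Array[Double]] = Array(
  Array(1.0, 2.1), Array(0.9, 1.8), Array(1.1, 2.0),
  Array(5.2, 6.0), Array(5.0, 6.3), Array(4.8, 5.9)
)
val numComponents = 2  // number of Gaussian components (latent variables)

// Seed the mixture, then iterate the E and M steps until the log-likelihood converges
val em = new MultivariateNormalMixtureExpectationMaximization(data)
val initialMixture = MultivariateNormalMixtureExpectationMaximization.estimate(data, numComponents)
em.fit(initialMixture)

val gmm = em.getFittedModel         // fitted mixture of multivariate normal distributions
val logLikelihood = em.getLogLikelihood

The Gaussian assumption keeps the M-step in closed form (weights, means, and covariance matrices are computed directly from the responsibilities), which is what keeps each iteration tractable.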

PCA

The computational complexity of the extraction of the principal components is O(m²n + m³), where m is the number of features and n the number of observations. The first term represents the computational complexity of computing the covariance matrix. The last term reflects the computational complexity of the eigenvalue decomposition of the m × m covariance matrix.
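
The two terms can be traced in code. The following is a minimal sketch of both steps using the Covariance and EigenDecomposition classes of Apache Commons Math; the observations array contains placeholder values for illustration only.

import org.apache.commons.math3.linear.EigenDecomposition
import org.apache.commons.math3.stat.correlation.Covariance

// n = 4 observations (rows) of m = 3 features (columns); placeholder values
val observations: Array[Array[Double]] = Array(
  Array(2.0, 1.5, 0.3), Array(1.8, 1.1, 0.4),
  Array(3.2, 2.9, 1.1), Array(2.7, 2.4, 0.9)
)

// First term, O(m²n): compute the m x m covariance matrix
val cov = new Covariance(observations).getCovarianceMatrix

// Second term, O(m³): eigenvalue decomposition of the covariance matrix
val eigen = new EigenDecomposition(cov)
val eigenvalues = eigen.getRealEigenvalues    // variance carried along each eigenvector
val firstAxis = eigen.getEigenvector(0)       // eigenvector associated with eigenvalues(0)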

The list of potential performance improvements or alternative solutions for PCA includes:

  • Assuming that the variance is Gaussian
  • Using a sparse matrix to compute eigenvalues for problems with large feature sets and missing data
  • Investigating alternatives to PCA to reduce the dimension of a model, such as the discrete Fourier transform (DFT) or singular value decomposition (SVD) [4:16]; a sketch of the SVD alternative follows this list
  • Using PCA in conjunction with EM (a research topic)
  • Deploying a dataset on a parallel data processing framework such as Apache Spark or Hadoop as explained in the Apache Spark section in Chapter 12, Scalable Frameworks
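
As a companion to the SVD alternative in the list above, the following minimal sketch applies the SingularValueDecomposition class of Apache Commons Math to the mean-centered data matrix; the observation values are the same placeholders used in the previous example.

import org.apache.commons.math3.linear.{Array2DRowRealMatrix, SingularValueDecomposition}

// Same n x m placeholder observations as in the previous sketch
val raw: Array[Array[Double]] = Array(
  Array(2.0, 1.5, 0.3), Array(1.8, 1.1, 0.4),
  Array(3.2, 2.9, 1.1), Array(2.7, 2.4, 0.9)
)

// Center each feature (column) so that the right singular vectors align with the principal components
val means = raw.transpose.map(col => col.sum / col.length)
val centered = raw.map(row => row.zip(means).map { case (x, mu) => x - mu })

val svd = new SingularValueDecomposition(new Array2DRowRealMatrix(centered))
val singularValues = svd.getSingularValues   // returned in non-increasing order
val principalAxes = svd.getV                 // columns are the right singular vectors (principal directions)

Working on the centered data matrix avoids forming the covariance matrix explicitly, which improves numerical stability when the covariance matrix would be ill-conditioned.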