Summary

In this chapter, we took a closer look at modeling sequences of observations with hidden states, using the two most common approaches:

  • The generative hidden Markov model (HMM), which maximizes the joint probability p(X, Y)
  • The discriminative conditional random field (CRF), which maximizes the conditional log-likelihood log p(Y|X)

The HMM is a special form of Bayesian network; it assumes that each observation is conditionally independent of the others given the hidden state. Under this assumption, the HMM is fairly easy to estimate, which is not the case for the CRF.

You also learned how to implement three dynamic programming techniques in Scala: the Viterbi, Baum-Welch, and alpha/beta (forward/backward) algorithms. These techniques are routinely used to solve optimization problems and should be an essential part of your algorithmic toolbox.
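As a refresher, the core of the Viterbi algorithm can be sketched in a few lines. The following is a minimal, self-contained illustration (not the chapter's implementation): given hypothetical initial probabilities `pi`, a transition matrix `a`, an emission matrix `b`, and an observation sequence, it recovers the most likely hidden-state path by dynamic programming with back-pointers. All names and the example numbers are assumptions for illustration only.

```scala
object ViterbiSketch {
  // delta(k)(j): probability of the best path ending in state j at time k
  // psi(k)(j): back-pointer to the best predecessor state of j at time k
  def viterbi(pi: Array[Double],
              a: Array[Array[Double]],
              b: Array[Array[Double]],
              obs: Array[Int]): Array[Int] = {
    val nStates = pi.length
    val t = obs.length
    val delta = Array.ofDim[Double](t, nStates)
    val psi = Array.ofDim[Int](t, nStates)

    // Initialization: prior times emission probability of the first observation
    for (i <- 0 until nStates) delta(0)(i) = pi(i) * b(i)(obs(0))

    // Recursion: extend the best path into each state j at each time step k
    for (k <- 1 until t; j <- 0 until nStates) {
      val (best, argBest) = (0 until nStates)
        .map(i => (delta(k - 1)(i) * a(i)(j), i))
        .maxBy(_._1)
      delta(k)(j) = best * b(j)(obs(k))
      psi(k)(j) = argBest
    }

    // Termination and backtracking from the most probable final state
    val path = new Array[Int](t)
    path(t - 1) = (0 until nStates).maxBy(delta(t - 1)(_))
    for (k <- t - 2 to 0 by -1) path(k) = psi(k + 1)(path(k + 1))
    path
  }

  def main(args: Array[String]): Unit = {
    // Two hidden states, three observation symbols; made-up parameters
    val pi = Array(0.6, 0.4)
    val a = Array(Array(0.7, 0.3), Array(0.4, 0.6))
    val b = Array(Array(0.1, 0.4, 0.5), Array(0.6, 0.3, 0.1))
    println(viterbi(pi, a, b, Array(0, 1, 2)).mkString(","))
  }
}
```

Note that a production implementation would work in log space to avoid numerical underflow on long observation sequences.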
