Summary

In the previous chapters, we discussed learning the parameters, as well as the structure, of Bayesian models using just the data samples. In this chapter, we discussed the same problems in the context of Markov models. First, we discussed a widely used parameter estimation technique, maximum likelihood estimation. We saw that in Markov models, computing the maximum likelihood estimate can be computationally expensive even for a simple model, and in some cases it is intractable. This motivated us to look for alternatives, such as using approximate inference algorithms to compute the gradient, or optimizing a different form of the likelihood. We also showed that learning with belief propagation can be reformulated so that inference and learning are optimized simultaneously. Finally, we discussed the problem of learning the structure of the model from data using the same two techniques, maximum likelihood estimation and Bayesian learning.
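As a quick reminder of where this intractability comes from, consider the standard log-linear parameterization of a Markov network (the notation below is the usual one, assumed here for illustration rather than quoted from this chapter). For data instances x^(1), ..., x^(M), features f_i, and parameters theta_i, the log-likelihood and its gradient are:

\ell(\theta) = \sum_{m=1}^{M} \sum_{i} \theta_i f_i\bigl(x^{(m)}\bigr) - M \log Z(\theta)

\frac{\partial \ell}{\partial \theta_i} = \sum_{m=1}^{M} f_i\bigl(x^{(m)}\bigr) - M \, \mathbb{E}_{\theta}\bigl[f_i\bigr]

The second term of the gradient is the expected feature count under the current model, and computing it requires inference over the full joint distribution through the partition function Z(theta). This is exactly the step that makes exact maximum likelihood expensive in general, and it is why approximate inference algorithms or an alternative likelihood become attractive.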

In the next chapter, we will discuss some of the most commonly used special cases of Bayesian and Markov networks, such as Naive Bayes and dynamic Bayesian networks.
