Summary

In this second chapter, we introduced the fundamentals of inference and saw the most important algorithms for computing posterior distributions: variable elimination and the junction tree algorithm. We learned how to build a graphical model by considering causality and temporal relationships, and by identifying patterns between variables. We saw a fundamental feature of probabilistic graphical models: the ability to combine graphs to build more complex models. We also learned how to perform inference with the junction tree algorithm in R, and saw that the same junction tree can be used for any type of query, on both marginal and joint distributions. In the last section, we saw several real-life examples of PGMs that can be used in many applications and are usually good candidates for exact inference.
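
As a reminder of what this looks like in practice, here is a minimal sketch, assuming the gRain package; the two-variable network, its probability values, and the variable names A and B are illustrative, not taken from the chapter:

    # Build a tiny A -> B network, compile it into a junction tree,
    # and answer several kinds of queries from that same tree.
    library(gRain)

    # Conditional probability tables (illustrative values)
    a <- cptable(~A, values = c(0.3, 0.7), levels = c("yes", "no"))
    b <- cptable(~B | A, values = c(0.8, 0.2, 0.1, 0.9), levels = c("yes", "no"))

    # Compile the CPTs into a junction tree once
    jtree <- grain(compileCPT(list(a, b)))

    # The same junction tree serves marginal and joint queries
    querygrain(jtree, nodes = "B", type = "marginal")
    querygrain(jtree, nodes = c("A", "B"), type = "joint")

    # Evidence can be set and queries repeated without rebuilding the tree
    querygrain(setEvidence(jtree, evidence = list(A = "yes")), nodes = "B")

The tree is compiled once; marginal, joint, and evidence-conditioned queries all reuse it.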

In this chapter, we also faced a recurring problem when defining a new graphical model: the parameters are tedious to determine, and even for small examples, setting them by hand is complicated. In the next chapter, we will learn how to estimate parameters automatically from a dataset. We will introduce the EM (Expectation Maximization) algorithm and experiment with a more complex problem: learning the structure of the graph itself. We will see that inference is the most important subroutine of all learning algorithms, hence the need for efficient algorithms such as the junction tree algorithm.
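
To give a flavor of why inference sits at the heart of learning, the following is a minimal sketch of EM on a toy problem chosen for illustration only (a mixture of two biased coins; none of it comes from the book). The E-step is exactly an inference step: it computes the posterior over the hidden variable given the current parameters.

    # Fit a mixture of two biased coins from 200 series of 10 flips each.
    set.seed(1)
    x <- rbinom(200, size = 10, prob = sample(c(0.3, 0.8), 200, replace = TRUE))

    theta <- c(0.4, 0.6)   # initial head probabilities for the two coins
    pi_k  <- c(0.5, 0.5)   # initial mixing weights

    for (iter in 1:50) {
      # E-step (inference): posterior responsibility of each coin
      # for each observation, given the current parameters
      lik <- sapply(1:2, function(k) pi_k[k] * dbinom(x, 10, theta[k]))
      r   <- lik / rowSums(lik)
      # M-step: re-estimate parameters from the expected counts
      pi_k  <- colMeans(r)
      theta <- colSums(r * x) / colSums(r * 10)
    }
    theta  # estimated head probabilities (up to label switching), near 0.3 and 0.8

Every iteration runs inference over the hidden variable, which is why an efficient inference algorithm matters so much for learning.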
