Finding the most probable assignment

In the previous section, we computed the maximum unnormalized probability value, but for MAP we also need the assignment of the variables at which this value occurs. Taking our earlier example of the network A → B, we first computed max_{A, B} P(A, B) = max_A [P(A) · max_B P(B | A)], but the state of B for which P(B | A) gives the maximum value depends on the state of A. So, we will first need to compute a* = argmax_A [P(A) · max_B P(B | A)] and then compute the state of B accordingly, as b* = argmax_B P(B | a*). In other words, after the usual max-marginalization pass over the factors, we trace back through the intermediate factors in the reverse order of elimination: the argmax of the last factor fixes the state of A, and substituting that state into the factor over A and B fixes the state of B. Together, (a*, b*) gives us the most probable assignment.
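To make this traceback concrete, here is a minimal sketch for a two-node network A → B using NumPy. The CPD values used here are illustrative placeholders, not the ones from our example network:

import numpy as np

# A minimal sketch of the traceback described above for a two-node
# network A -> B. The CPD values below are illustrative placeholders.
P_A = np.array([0.4, 0.6])             # P(A)
P_B_given_A = np.array([[0.7, 0.3],    # P(B | A = a0)
                        [0.2, 0.8]])   # P(B | A = a1)

# For each state of A, the best state of B and the corresponding value.
max_B = P_B_given_A.max(axis=1)        # max_B P(B | A)
argmax_B = P_B_given_A.argmax(axis=1)  # maximizing state of B per state of A

# Pick the state of A that maximizes P(A) * max_B P(B | A).
a_star = int(np.argmax(P_A * max_B))

# Trace back the state of B using the chosen state of A.
b_star = int(argmax_B[a_star])

print({'A': a_star, 'B': b_star})      # the MAP assignment

The dictionary printed at the end mirrors the form of the map_query outputs shown later in this section: a mapping from each variable to the index of its maximizing state.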

Also, the computational cost of this operation is not high, as we are simply doing another pass over the factors that have already been computed. Hence, the cost would be linear in the number of variables in the network.

Now, let's continue the previous code example and run some MAP queries over the network using pgmpy:

In [20]: model_inference.map_query(variables=['late_for_school'])
Out[20]: {'late_for_school': 0}
In [21]: model_inference.map_query(variables=['late_for_school',
                                              'accident'])
Out[21]: {'accident': 1, 'late_for_school': 0}

# Again we can pass the evidence to the query using the evidence 
# argument in the form of {variable: state}.
In [22]: model_inference.map_query(variables=['late_for_school'],
                                   evidence={'accident': 1})
Out[22]: {'late_for_school': 0}
In [23]: model_inference.map_query(variables=['late_for_school'],
                                   evidence={'accident': 1, 
                                             'rain': 1})
Out[23]: {'late_for_school': 0}

# Also, in the case of MAP queries, we can specify the elimination 
# order of the variables. If the elimination order is not 
# specified, pgmpy automatically computes a suitable elimination 
# order for the query.
In [24]: model_inference.map_query(
                       variables=['late_for_school'],
                       elimination_order=['accident', 'rain',
                                          'traffic_jam',
                                          'getting_up_late',
                                          'long_queues'])
Out[24]: {'late_for_school': 0}
In [25]: model_inference.map_query(
                       variables=['late_for_school'],
                       evidence={'accident': 1},
                       elimination_order=['rain',
                                          'traffic_jam',
                                          'getting_up_late',
                                          'long_queues'])
Out[25]: {'late_for_school': 0}

# Similarly, MAP queries can be done using belief propagation.
In [26]: belief_propagation.map_query(['late_for_school'],
                                      evidence={'accident': 1})
Out[26]: {'late_for_school': 0}
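As a quick sketch (reusing the model_inference object from the earlier example), the max_marginal method from the previous section can be paired with map_query to obtain both the maximum probability value and the assignment at which it occurs:

# Maximum (unnormalized) probability value over the eliminated variables.
model_inference.max_marginal(variables=['late_for_school'],
                             evidence={'accident': 1})

# The assignment of late_for_school at which that maximum occurs.
model_inference.map_query(variables=['late_for_school'],
                          evidence={'accident': 1})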