Algorithms to compute optimal policy using dynamic programming

The standard algorithms for computing optimal policies for an MDP using dynamic programming are the following, and we will cover both in detail later in this chapter:

  • Value Iteration algorithm: An iterative algorithm in which the state values are updated repeatedly until they converge to the optimal values; these optimal values are then used to determine the optimal policy
  • Policy Iteration algorithm: An iterative algorithm in which policy evaluation and policy improvement are applied alternately until the policy converges to the optimal policy

Value Iteration algorithm: Value iteration is easy to compute, for the very reason that it iterates only over state values. First, we compute the optimal value function V*, and then plug those values into the optimal policy equation to determine the optimal policy. Just to give a sense of the size of the problem: with 11 possible states and four possible actions in each state (N-north, S-south, E-east, W-west), there are 4^11 possible policies overall. The value iteration algorithm consists of the following steps:

  1. Initialize V(s) = 0 for all states s
  2. For every state s, update:
     V(s) := R(s) + gamma * max_a Σ_s' P_sa(s') V(s')
  3. By repeatedly computing step 2, the values eventually converge to the optimal values V*(s) for all the states

There are two ways of performing the value updates in step 2 of the algorithm:

  • Synchronous update - In a synchronous update (also known as the Bellman backup operator), we compute the right-hand side of the update in step 2 for every state using the current values, and only then substitute the results into the left-hand side, so all state values are replaced together in one sweep
  • Asynchronous update - Update the values of the states one at a time rather than all at once, sweeping through the states in a fixed order (update state number 1, then state 2, and so on), so later updates within a sweep already use the newly computed values. Asynchronous updates usually converge a little faster than synchronous updates. A minimal sketch of both update styles follows this list.
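As a rough illustration, the following Python sketch implements the value iteration update for a generic finite MDP. The names (value_iteration, transitions, rewards) are illustrative, not code from this book; transitions[(s, a)] is assumed to be a dictionary mapping each next state to its probability, and rewards[s] is the immediate reward R(s). The synchronous flag simply controls whether a sweep reads from a frozen copy of the values (synchronous) or from the values being updated in place (asynchronous).

    # A minimal value iteration sketch for a generic finite MDP (illustrative only).
    # transitions[(s, a)] is assumed to be a dict {s_next: probability};
    # rewards[s] is the immediate reward R(s). Both names are hypothetical.

    def value_iteration(states, actions, transitions, rewards,
                        gamma=0.99, theta=1e-6, synchronous=True):
        V = {s: 0.0 for s in states}               # step 1: initialize V(s) = 0
        while True:
            delta = 0.0
            source = dict(V) if synchronous else V  # synchronous: read old values only
            for s in states:
                # step 2: Bellman backup V(s) = R(s) + gamma * max_a sum_s' P_sa(s') V(s')
                best = max(sum(p * source[s2]
                               for s2, p in transitions[(s, a)].items())
                           for a in actions)
                new_v = rewards[s] + gamma * best
                delta = max(delta, abs(new_v - V[s]))
                V[s] = new_v
            if delta < theta:                       # step 3: stop once values converge
                return V

On the grid world discussed next, the states would be the 11 reachable cells, the actions would be N, S, E, and W, the transitions would encode the noisy movement model, and gamma would be 0.99 as stated below.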

Illustration of value iteration on a grid world example: The application of value iteration to a grid world is explained in the following image, and the complete code for solving a real problem is provided at the end of this section. After applying the preceding value iteration algorithm to the MDP using the Bellman equations, we obtain the following optimal values V* for all the states (the gamma value is chosen as 0.99):

When we plug these values into our policy equation, we obtain the following policy grid:
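The plugging-in step itself is mechanical: for each state, pick the action with the highest expected next-state value under V*. A possible sketch, reusing the hypothetical transitions dictionary from the value iteration sketch above:

    # Extract a greedy policy from the optimal values V* (illustrative sketch).
    # transitions[(s, a)] is the same hypothetical {s_next: probability} dict as before.

    def extract_policy(states, actions, transitions, V):
        policy = {}
        for s in states:
            # pi*(s) = argmax_a sum_s' P_sa(s') V*(s')
            policy[s] = max(actions,
                            key=lambda a: sum(p * V[s2]
                                              for s2, p in transitions[(s, a)].items()))
        return policy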

Here, at position (3,1), we would like to show mathematically why the optimal policy suggests going left (west) rather than moving up (north):

Due to the wall, whenever the robot tries to move towards the south (downwards), it remains in the same place; hence, in the expected payoff for moving west, we assigned the current position's value of 0.71 a probability of 0.1.

Similarly, for north, we calculated the total payoff as follows:

So, it would be optimal to move towards the west rather than the north, and the optimal policy is therefore chosen to do so. A small numeric check of this comparison is sketched below.
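To make the comparison concrete, here is a rough numeric check. It assumes the usual noisy movement model for this grid world (0.8 in the intended direction, 0.1 to each side), and the neighbouring cell values are hypothetical placeholders standing in for the V* grid in the image above; only the 0.71 of the current cell comes from the text.

    # Numeric check of west vs. north at position (3,1) (illustrative only).
    # The neighbouring values are hypothetical placeholders for the V* grid;
    # only v_here = 0.71 comes from the text above.
    v_here  = 0.71   # current cell; reached with probability 0.1 when the
                     # southward slip bounces off the wall
    v_west  = 0.75   # hypothetical value of the cell to the west
    v_north = 0.69   # hypothetical value of the cell to the north
    v_east  = 0.49   # hypothetical value of the cell to the east

    # Assumed noisy movement: 0.8 in the intended direction, 0.1 to each side.
    payoff_west  = 0.8 * v_west  + 0.1 * v_north + 0.1 * v_here
    payoff_north = 0.8 * v_north + 0.1 * v_west  + 0.1 * v_east

    print(payoff_west, payoff_north)   # with these values, west gives the larger payoff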

Policy Iteration Algorithm: Policy iteration is another way of obtaining the optimal policy for an MDP, in which policy evaluation and policy improvement are applied alternately until the solution converges to the optimal policy. The policy iteration algorithm consists of the following steps:

  1. Initialize a random policy π
  2. Repeat the following until convergence:
    • Policy evaluation: solve the Bellman equations for the current policy to obtain Vπ, using a system of linear equations
    • Policy improvement: update the policy with respect to the new value function, pretending the computed value is optimal and choosing actions with the argmax formula (a sketch follows this list)
  3. By repeating these steps, both the value function and the policy converge to their optimal values
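As with value iteration, a rough sketch may help; the names below (policy_iteration, transitions, rewards) are illustrative and not code from this book. Policy evaluation solves the linear system V = R + gamma * P_π V exactly with NumPy, and policy improvement is the greedy argmax step.

    # A minimal policy iteration sketch for a generic finite MDP (illustrative only).
    # transitions[(s, a)] is a hypothetical dict {s_next: probability}; rewards[s] = R(s).
    import random
    import numpy as np

    def policy_iteration(states, actions, transitions, rewards, gamma=0.99):
        idx = {s: i for i, s in enumerate(states)}
        policy = {s: random.choice(actions) for s in states}   # step 1: random policy
        while True:
            # Step 2a: policy evaluation -- solve (I - gamma * P_pi) V = R exactly
            P = np.zeros((len(states), len(states)))
            for s in states:
                for s2, p in transitions[(s, policy[s])].items():
                    P[idx[s], idx[s2]] = p
            R = np.array([rewards[s] for s in states])
            V = np.linalg.solve(np.eye(len(states)) - gamma * P, R)

            # Step 2b: policy improvement -- greedy with respect to the new values
            new_policy = {
                s: max(actions,
                       key=lambda a: sum(p * V[idx[s2]]
                                         for s2, p in transitions[(s, a)].items()))
                for s in states
            }
            if new_policy == policy:       # step 3: converged to the optimal policy
                return policy, {s: V[idx[s]] for s in states}
            policy = new_policy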

Policy iteration tends to do well on smaller problems. If an MDP has an enormous number of states, policy iteration becomes computationally expensive, because each evaluation step solves a system of linear equations in the state values. As a result, large MDPs tend to use value iteration rather than policy iteration.

What if we don't know the exact state transition probabilities P_sa in real-life examples?

We need to estimate the probabilities from the data, using the following simple formula: P_sa(s') is the number of times we took action a in state s and ended up in s', divided by the number of times we took action a in state s.

If no data is available for some state-action pair, the ratio becomes 0/0; in that case we can fall back to a default uniform distribution over the states.
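A minimal sketch of this estimation step, assuming the experience is available as a list of (state, action, next_state) triples; the names estimate_transitions and experience are hypothetical:

    # Estimating P_sa(s') from observed (state, action, next_state) transitions
    # (illustrative sketch; the variable names are hypothetical).
    from collections import Counter, defaultdict

    def estimate_transitions(experience, states, actions):
        counts = defaultdict(Counter)          # counts[(s, a)][s2] = #times s,a -> s2
        for s, a, s2 in experience:
            counts[(s, a)][s2] += 1

        transitions = {}
        for s in states:
            for a in actions:
                total = sum(counts[(s, a)].values())
                if total == 0:
                    # 0/0 case: no data for (s, a), fall back to a uniform distribution
                    transitions[(s, a)] = {s2: 1.0 / len(states) for s2 in states}
                else:
                    transitions[(s, a)] = {s2: c / total
                                           for s2, c in counts[(s, a)].items()}
        return transitions

The resulting dictionary has the same shape as the hypothetical transitions used in the earlier sketches, so an estimated model can be fed straight into value or policy iteration.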
