SARSA on-policy TD control

State-action-reward-state-action (SARSA) is an on-policy TD control method in which the policy is improved using generalized policy iteration (GPI), with TD methods used only for the evaluation (prediction) step. The first step is for the algorithm to learn an action-value function rather than a state-value function. In particular, for an on-policy method we estimate qπ(s, a) for the current behavior policy π and for all states s and actions a, using essentially the same TD method used for learning vπ.

Now, we consider transitions from state-action pair to state-action pair, and learn the values of state-action pairs:

Q(St, At) ← Q(St, At) + α[Rt+1 + γQ(St+1, At+1) − Q(St, At)]

This update is done after every transition from a non-terminal state St. If St+1 is terminal, then Q(St+1, At+1) is defined as zero. This rule uses every element of the quintuple of events (St, At, Rt+1, St+1, At+1) that makes up a transition from one state-action pair to the next. This quintuple gives rise to the name SARSA for the algorithm.
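For concreteness, here is a minimal sketch of this update for a tabular Q stored as a NumPy array indexed by (state, action); the function name sarsa_update and the default step size alpha and discount gamma are illustrative assumptions, not part of the text:

```python
import numpy as np

def sarsa_update(Q, s, a, r, s_next, a_next, alpha=0.1, gamma=0.99, terminal=False):
    """Apply one SARSA update to the tabular action-value array Q."""
    # If S_{t+1} is terminal, Q(S_{t+1}, A_{t+1}) is defined as zero,
    # so the bootstrap term is dropped.
    target = r if terminal else r + gamma * Q[s_next, a_next]
    Q[s, a] += alpha * (target - Q[s, a])
    return Q

if __name__ == "__main__":
    Q = np.zeros((4, 2))                                   # 4 states, 2 actions
    Q = sarsa_update(Q, s=0, a=1, r=1.0, s_next=2, a_next=0)
    print(Q[0, 1])                                         # 0.1 * (1.0 + 0.99 * 0 - 0) = 0.1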

As in all on-policy methods, we continually estimate qπ for the behavior policy π, and at the same time change π toward greediness with respect to qπ. The SARSA algorithm is as follows:

  1. Initialize Q(s, a) arbitrarily for all states s and actions a, and set Q(terminal state, ·) = 0
  2. Repeat (for each episode):
    • Initialize S
    • Choose A from S using the policy derived from Q (for example, ε-greedy)
    • Repeat (for each step of the episode):
      • Take action A, observe R, S'
      • Choose A' from S' using the policy derived from Q (for example, ε-greedy)
      • Q(S, A) ← Q(S, A) + α[R + γQ(S', A') − Q(S, A)]
      • S ← S'; A ← A'
    • Until S is terminal
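As a sketch under stated assumptions, the pseudocode above could be implemented for a discrete-state, discrete-action environment with a Gymnasium-style interface (env.reset() returning (state, info) and env.step(a) returning (state, reward, terminated, truncated, info)); the names sarsa and epsilon_greedy and the hyperparameter defaults are illustrative:

```python
import numpy as np

def epsilon_greedy(Q, s, n_actions, epsilon, rng):
    """Choose a random action with probability epsilon, otherwise a greedy one."""
    if rng.random() < epsilon:
        return int(rng.integers(n_actions))
    return int(np.argmax(Q[s]))

def sarsa(env, n_states, n_actions, episodes=500,
          alpha=0.1, gamma=0.99, epsilon=0.1, seed=0):
    """Tabular SARSA control loop mirroring the pseudocode above."""
    rng = np.random.default_rng(seed)
    Q = np.zeros((n_states, n_actions))       # Initialize Q(s, a) (here: zeros)
    for _ in range(episodes):
        s, _ = env.reset()                    # Initialize S
        a = epsilon_greedy(Q, s, n_actions, epsilon, rng)              # Choose A from S
        done = False
        while not done:                       # Repeat (for each step of the episode)
            s_next, r, terminated, truncated, _ = env.step(a)          # Take action A, observe R, S'
            done = terminated or truncated
            a_next = epsilon_greedy(Q, s_next, n_actions, epsilon, rng)  # Choose A' from S'
            # Q(terminal, .) is defined as zero, so no bootstrap on termination.
            target = r if terminated else r + gamma * Q[s_next, a_next]
            Q[s, a] += alpha * (target - Q[s, a])                      # SARSA update
            s, a = s_next, a_next             # S <- S'; A <- A'
    return Q
```

For example, Q = sarsa(env, env.observation_space.n, env.action_space.n) would train the table on a discrete environment such as FrozenLake-v1.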