Reinforcement versus supervised and unsupervised learning

Whereas supervised and unsupervised learning sit at opposite ends of the spectrum, RL lies somewhere in between. It is not supervised learning, because the training data comes from the algorithm's own choices between exploration and exploitation; and it is not unsupervised, because the algorithm receives feedback from the environment. As long as performing an action in a state produces a reward, you can use RL to discover a sequence of actions that maximizes the expected reward, as the sketch below illustrates.
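
The following is a minimal sketch of that feedback loop. The environment interface (reset, step) and the q_table are assumptions for illustration, not part of the text; the point is only that the data an RL agent learns from depends on whether it explores (random action) or exploits (best known action), and that the reward comes back from the environment itself.

```python
import random

# Hypothetical environment interface (an assumption, not from the text):
#   env.reset()        -> initial state
#   env.step(action)   -> (next_state, reward, done)

def run_episode(env, q_table, actions, epsilon=0.1):
    """Collect reward while balancing exploration against exploitation."""
    state = env.reset()
    total_reward = 0.0
    done = False
    while not done:
        if random.random() < epsilon:
            action = random.choice(actions)   # explore: try something new
        else:
            # exploit: pick the action with the highest estimated value so far
            action = max(actions, key=lambda a: q_table.get((state, a), 0.0))
        state, reward, done = env.step(action)  # feedback from the environment
        total_reward += reward
    return total_reward
```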

The goal of an RL agent is to maximize the total reward it receives in the long run. The third main subelement is the value function. Whereas rewards express the immediate desirability of a state, values indicate its long-term desirability, taking into account the states that are likely to follow and the rewards available in those states. The value function is always defined with respect to a particular policy. During learning, an agent seeks out actions that lead to states of highest value, because those actions yield the greatest cumulative reward in the end.
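
A small worked sketch may help separate reward from value. The chain of states, the reward numbers, and the discount factor below are illustrative assumptions, not from the text: state 0 gives no immediate reward, yet its value is high because a reward of 10 lies a few steps ahead.

```python
# Three-state chain: moving right gives reward 0, 0, then 10 at the end.
rewards = {0: 0.0, 1: 0.0, 2: 10.0}   # immediate reward for leaving each state
gamma = 0.9                            # discount factor (illustrative choice)
values = {0: 0.0, 1: 0.0, 2: 0.0, 3: 0.0}  # state 3 is terminal

# Repeated backups propagate the final reward backward through the chain.
for _ in range(50):
    for s in (2, 1, 0):
        values[s] = rewards[s] + gamma * values[s + 1]

print(values)
# {0: 8.1, 1: 9.0, 2: 10.0, 3: 0.0}
# State 0 has zero immediate reward but high value: the reward it leads to matters.
```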
