The backpropagation algorithm

The backward propagation of errors, or simply backpropagation, is a widely used method for training artificial neural networks. It is used in combination with an optimization method, such as gradient descent, which is described later in this chapter.

The goal of backpropagation is to optimize the weights so that the neural network can learn how to correctly map arbitrary inputs to outputs. In other words, when using backpropagation, the network's output is repeatedly compared to the desired output, and the weights are adjusted until the difference between the two is minimized.
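The loop described above can be sketched in a few lines of code. The following is a minimal, illustrative example (not taken from this book): a small network with one hidden layer and sigmoid activations, trained on the classic XOR problem with squared-error loss and plain gradient descent. The layer sizes, learning rate, and iteration count are all assumptions chosen for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy data: the XOR mapping from 2-D inputs to a single output.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Randomly initialized weights (sizes chosen for illustration).
W1 = rng.normal(size=(2, 4))   # input -> hidden
b1 = np.zeros((1, 4))
W2 = rng.normal(size=(4, 1))   # hidden -> output
b2 = np.zeros((1, 1))
lr = 0.5                       # learning rate (assumed)

initial_loss = None
for step in range(10000):
    # Forward pass: compute the network's current output.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    if initial_loss is None:
        initial_loss = float(np.mean((out - y) ** 2))

    # Backward pass: propagate the output error back through the layers.
    d_out = (out - y) * out * (1 - out)   # error at the output layer
    d_h = (d_out @ W2.T) * h * (1 - h)    # error at the hidden layer

    # Gradient-descent step: adjust weights to reduce the error.
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0, keepdims=True)

final_loss = float(np.mean((out - y) ** 2))
print(f"loss before training: {initial_loss:.4f}")
print(f"loss after training:  {final_loss:.4f}")
```

Each iteration performs the comparison-and-adjustment cycle described in the text: the forward pass produces the current output, the backward pass measures how much each weight contributed to the error, and the gradient-descent step nudges the weights to shrink that error.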
