Examples of learning algorithms

Let's now bring the theoretical content presented so far together into simple examples of learning algorithms. In this chapter, we are going to explore two neural architectures: the perceptron and the adaline. Both are very simple, each containing only a single layer.

Perceptron

The perceptron learns by taking into account only the error between the target and the output, together with the learning rate. The update rule is as follows:

$w_i \leftarrow w_i + \eta \, (t[k] - y[k]) \, x_i[k]$

Here, $w_i$ is the weight connecting the ith input to the neuron, $t[k]$ is the target output for the kth sample, $y[k]$ is the output of the neural network for the kth sample, $x_i[k]$ is the ith input for the kth sample, and $\eta$ is the learning rate. It can be seen that this rule is very simplistic and does not take into account the nonlinearity present in the perceptron's activation function; it just moves the weights in the direction opposite to the error, in the naïve hope that this will take the network closer to the objective.
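To make the rule concrete, here is a minimal sketch in Python of a perceptron trained with this update. The dataset (an AND gate), the step activation, the hyperparameters, and all names are illustrative assumptions, not taken from the text:

```python
import numpy as np

def perceptron_update(w, x, t, y, eta):
    # w_i <- w_i + eta * (t[k] - y[k]) * x_i[k]
    return w + eta * (t - y) * x

# Hypothetical dataset: the AND logic gate
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([0, 0, 0, 1], dtype=float)

w = np.zeros(2)   # input weights
b = 0.0           # bias, treated as a weight on a constant input of 1
eta = 0.1

for epoch in range(20):
    for x, t in zip(X, T):
        y = 1.0 if np.dot(w, x) + b >= 0 else 0.0  # step activation
        w = perceptron_update(w, x, t, y, eta)
        b += eta * (t - y)

print("weights:", w, "bias:", b)
```

Note that the update only ever uses the raw error $(t[k] - y[k])$; the step activation appears when computing the output, but its shape plays no role in the weight change, which is exactly the limitation discussed above.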

Delta rule

A better algorithm, based on the gradient descent method, was developed to take the nonlinearity, as well as its derivative, into account. What this algorithm adds to the perceptron rule is the derivative of the activation function, $g'(h)$, with $h$ being the weighted sum of all the neuron's inputs before it is passed to the activation function. So, the update rule is as follows:

$w_i \leftarrow w_i + \eta \, (t[k] - y[k]) \, g'(h[k]) \, x_i[k]$
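The following is a minimal Python sketch of the delta rule under one possible set of assumptions: a sigmoid activation (whose derivative is easy to compute), the same illustrative AND-gate dataset as before, and hyperparameters chosen only for demonstration:

```python
import numpy as np

def sigmoid(h):
    # activation function g(h)
    return 1.0 / (1.0 + np.exp(-h))

def sigmoid_prime(h):
    # derivative g'(h) = g(h) * (1 - g(h))
    s = sigmoid(h)
    return s * (1.0 - s)

# Hypothetical dataset: AND gate; the last column is a constant bias input
X = np.array([[0, 0, 1], [0, 1, 1], [1, 0, 1], [1, 1, 1]], dtype=float)
T = np.array([0, 0, 0, 1], dtype=float)

w = np.zeros(3)
eta = 0.5

for epoch in range(5000):
    for x, t in zip(X, T):
        h = np.dot(w, x)   # weighted sum of inputs, before activation
        y = sigmoid(h)     # neuron output g(h)
        # w_i <- w_i + eta * (t - y) * g'(h) * x_i
        w += eta * (t - y) * sigmoid_prime(h) * x

print("outputs:", np.round(sigmoid(X @ w), 3))
```

Compared with the perceptron rule, the only change is the extra factor $g'(h[k])$, which scales each update by how sensitive the neuron's output is to its weighted sum; this is what makes the rule a proper gradient descent step on the squared error.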