Understanding activation functions

An activation function is applied at the output end of a neural network to determine the output. Depending on the function, it usually maps the resulting values into a range such as -1 to 1. Ultimately, it determines whether a neuron will fire or activate, much like a light bulb turning on or off.
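As a rough illustration, the following Python sketch passes a neuron's weighted sum through the tanh function and then applies a simple threshold to decide whether the neuron "fires"; the inputs, weights, bias, and threshold here are made-up values used purely for illustration:

import math

def neuron_output(inputs, weights, bias):
    # Raw (pre-activation) value: weighted sum of the inputs plus a bias.
    return sum(x * w for x, w in zip(inputs, weights)) + bias

def tanh_activation(z):
    # Squashes the raw value into the range -1 to 1.
    return math.tanh(z)

# Hypothetical inputs, weights, and bias.
raw = neuron_output([0.5, -1.2, 0.3], [0.8, 0.1, -0.4], bias=0.2)
activated = tanh_activation(raw)

# A simple threshold decides whether the neuron fires (the light bulb goes on).
fires = activated > 0
print(f"raw={raw:.3f}, activated={activated:.3f}, fires={fires}")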

The activation function is the last piece of the network before the output and can be considered the supplier of the output value. There are many kinds of activation functions that can be used, and the following diagram highlights just a small subset of them:

Activation functions fall into two broad types, linear and non-linear:

  • Linear: A linear function is one whose graph is a straight line, as depicted here:

  • Non-linear: A non-linear function is one whose graph is not a straight line, as depicted here (the sketch after this list contrasts the two):
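To make the distinction concrete, here is a small Python sketch that evaluates a linear function and the non-linear tanh function at the same inputs; the scaling factor and sample points are arbitrary choices for illustration:

import math

def linear(x):
    # Linear activation: output is directly proportional to the input,
    # so its graph is a straight line through the origin.
    return 2.0 * x

def nonlinear(x):
    # Non-linear activation (tanh): the graph curves and saturates at -1 and 1.
    return math.tanh(x)

samples = [-2.0, -1.0, 0.0, 1.0, 2.0]
for x in samples:
    print(f"x={x:+.1f}  linear={linear(x):+.2f}  tanh={nonlinear(x):+.2f}")

# Equal input steps give equal output steps only for the linear function;
# tanh's output changes less and less as the input moves away from zero.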
