Visual activation function plotting

When working with activation functions, it is important to understand visually what each function looks like before you use it. We are going to plot, and then benchmark, several activation functions so that you can see them for yourself.

The logistic steep approximation and the Swish activation function each have a distinctive shape when plotted individually. Because there are many types of activation function, it is also worth plotting all of our activation functions together so that you can compare their shapes side by side.
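If you would like to reproduce plots like these yourself, the following is a minimal C# sketch (not the SharpNeat plotting program) that evaluates a steepened logistic and Swish over a range of inputs and writes a CSV file you can chart with any plotting tool. The 4.9 slope constant and the Swish form x * sigma(x) are assumptions based on common definitions of these functions, not values taken from SharpNeat's source.

    // A minimal sketch, not the SharpNeat plotting program: evaluates a
    // steepened logistic and Swish over [-5, 5] and writes a CSV file
    // that any charting tool can plot. The 4.9 slope constant is an
    // assumption based on the steepened sigmoid commonly used in NEAT.
    using System;
    using System.IO;
    using System.Text;

    class PlotActivations
    {
        // Steepened logistic: f(x) = 1 / (1 + e^(-4.9x)).
        static double LogisticSteep(double x) => 1.0 / (1.0 + Math.Exp(-4.9 * x));

        // Swish: f(x) = x * sigma(x) = x / (1 + e^(-x)).
        static double Swish(double x) => x / (1.0 + Math.Exp(-x));

        static void Main()
        {
            var sb = new StringBuilder("x,logisticSteep,swish\n");
            for (double x = -5.0; x <= 5.0; x += 0.1)
            {
                sb.AppendLine($"{x:F2},{LogisticSteep(x):F6},{Swish(x):F6}");
            }

            File.WriteAllText("activations.csv", sb.ToString());
            Console.WriteLine("Wrote activations.csv; plot x against each column.");
        }
    }

Opening activations.csv in a spreadsheet and charting each column against x will show the steepened logistic saturating at 0 and 1, while Swish dips slightly below zero for negative inputs before growing roughly linearly for positive ones.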

Note: You can download the program that produces these plots from the SharpNeat project on GitHub at https://github.com/colgreen/sharpneat.

At this point, you may be wondering why we even care what the plots look like, and that's a fair question. We care because you will be using these functions quite a bit once you progress to hands-on experience and dive deeper into neural networks. It is very handy to know whether an activation function will place the value of your neuron in the on or off state, and what range it will keep the values in. You will no doubt encounter and/or use activation functions in your career as a machine-learning developer, and knowing the difference between a Tanh and a LeakyReLU activation function is very important.
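To make that last point concrete, the short sketch below (an illustration, not SharpNeat code) evaluates Tanh and a LeakyReLU at a few sample inputs; the 0.01 leak coefficient is an assumed common default, not a value from the book or the library. Tanh squashes every input into (-1, 1), while LeakyReLU passes positive values through unchanged and only slightly leaks negatives, which is exactly the kind of range difference the plots make visible.

    // A small illustrative sketch, not library code: compares the output
    // ranges of Tanh and LeakyReLU. The 0.01 leak coefficient is a common
    // default and an assumption here, not a value taken from SharpNeat.
    using System;

    class CompareActivations
    {
        static double LeakyReLU(double x, double leak = 0.01)
            => x >= 0.0 ? x : leak * x;

        static void Main()
        {
            double[] inputs = { -10.0, -1.0, 0.0, 1.0, 10.0 };
            Console.WriteLine("x\tTanh\tLeakyReLU");
            foreach (double x in inputs)
            {
                // Tanh stays within (-1, 1); LeakyReLU passes positives
                // through unchanged and scales negatives by the leak factor.
                Console.WriteLine($"{x}\t{Math.Tanh(x):F4}\t{LeakyReLU(x):F4}");
            }
        }
    }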
