Visual activation function plotting

Here is a plot of a local and a global minimum, produced with a custom version of SharpNEAT. It is absolutely amazing what you can do with this framework in the realm of neural networks and advanced machine learning!

As I mentioned, we are going to plot and then benchmark several activation functions. We hear the term activation function everywhere, but do we really know what it means? Let's start with a quick explanation, just in case you are not familiar with it.

An activation function is used to decide whether a neuron has been activated or not; it maps the neuron's weighted input sum to an output value. Some people like to replace the word activated with fired. Whatever floats your boat! Either way, it's what finally determines whether something is on or off, fired or not, activated or not.
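To make the idea concrete, here is a minimal C# sketch of two of the functions we will plot, the Logistic Steep approximation and Swish. This is not SharpNEAT's actual implementation; the class name and the 4.9 steepness constant are assumptions of the sketch (4.9 is the slope traditionally used in NEAT):

    using System;

    public static class Activations
    {
        // Logistic Steep approximation: a standard sigmoid with a
        // steepness constant applied to the input; output in (0, 1).
        // The 4.9 slope is assumed here and may differ in your build.
        public static double LogisticSteep(double x)
            => 1.0 / (1.0 + Math.Exp(-4.9 * x));

        // Swish: x * sigmoid(x). Smooth, non-monotonic, unbounded above.
        public static double Swish(double x)
            => x / (1.0 + Math.Exp(-x));
    }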

Let's start by showing you what a plot of a single activation function looks like:

This is what the Logistic Steep approximation and the Swish activation functions look like when they are plotted individually. Since there are many types of activation functions, here is what all of our activation functions look like when they are plotted together:
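If you want to reproduce plots like these yourself, you can do so with any charting tool by sampling each function over an interval and exporting the points. Here is a library-agnostic sketch; the file name, sampling range, and step size are arbitrary choices of the sketch, not values taken from SharpNEAT:

    using System;
    using System.IO;

    public static class ActivationPlotter
    {
        static double LogisticSteep(double x) => 1.0 / (1.0 + Math.Exp(-4.9 * x));
        static double Swish(double x) => x / (1.0 + Math.Exp(-x));

        public static void Main()
        {
            // Sample both functions from -5 to +5 and write the points
            // to a CSV file; plot x against each column to recreate
            // the individual curves.
            using var writer = new StreamWriter("activations.csv");
            writer.WriteLine("x,logistic_steep,swish");

            for (double x = -5.0; x <= 5.0; x += 0.05)
                writer.WriteLine($"{x},{LogisticSteep(x)},{Swish(x)}");
        }
    }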

At this point you may be wondering: why do we even care what the plots look like? Great question. We care because you are going to use these functions quite a bit once you progress into neural networks and beyond. It is very handy to know whether your activation function will place your neuron in an on or off state, and what range it will keep (or needs) its values in. You will no doubt encounter and/or use activation functions in your career as a machine learning developer, and knowing the difference between a TanH and a LeakyReLU activation function is very important.
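To see why the range matters, here is a small comparison of exactly those two functions: TanH squashes every input into (-1, 1), while LeakyReLU is unbounded for positive inputs and only dampens negative ones. The 0.01 leak factor is a common default, assumed here rather than taken from SharpNEAT:

    using System;

    public static class RangeDemo
    {
        // TanH: bounded output in (-1, 1).
        static double TanH(double x) => Math.Tanh(x);

        // LeakyReLU: positives pass through unchanged; negatives are
        // scaled by a small leak factor (0.01 assumed here).
        static double LeakyReLU(double x) => x >= 0.0 ? x : 0.01 * x;

        public static void Main()
        {
            foreach (double x in new[] { -10.0, -1.0, 0.0, 1.0, 10.0 })
                Console.WriteLine($"x = {x,6}: tanh = {TanH(x):F4}, leakyRelu = {LeakyReLU(x):F4}");
            // No matter how large |x| grows, tanh stays inside (-1, 1);
            // LeakyReLU grows without bound for positive x.
        }
    }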
