How to do it...

This section covers the types of activation functions used in multilayer perceptrons. The activation function is one of the critical components of an ANN, as it defines the output of a node for a given input. Many different activation functions are used while building a neural network (a short TensorFlow sketch follows this list):

  • Sigmoid: The sigmoid activation function is a continuous function, also known as the logistic function, with the form 1/(1+exp(-x)). The sigmoid saturates for large positive or negative inputs, which drives its gradient toward zero and zeros out the backpropagated terms during training. In TensorFlow, the sigmoid activation function is available as tf.nn.sigmoid.
  • ReLU: The rectified linear unit (ReLU) is one of the most popular activation functions used in neural networks to capture non-linearity; it is continuous but not smooth. The ReLU function is defined as max(0, x). In TensorFlow, the ReLU activation function is available as tf.nn.relu.
  • ReLU6: It caps the ReLU function at 6 and is defined as min(max(0, x), 6), so the activation never grows larger than 6. The function is available in TensorFlow as tf.nn.relu6.
  • tanh: The hyperbolic tangent is another smooth function used as an activation function in neural networks; it is bounded in the range [-1, 1] and is available as tf.nn.tanh.
  • softplus: It is a smooth approximation of ReLU, so its derivative exists everywhere, and is defined as log(exp(x) + 1). In TensorFlow, softplus is available as tf.nn.softplus.
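
As a quick illustration, the following minimal sketch applies each of these activation functions to the same small tensor. It assumes a TensorFlow 2.x environment with eager execution (the same tf.nn functions exist in 1.x, but there the results would need to be evaluated inside a Session); the input values are made up for illustration:

import tensorflow as tf

# A small batch of pre-activation values (hypothetical example data).
x = tf.constant([-6.0, -1.0, 0.0, 1.0, 6.0, 10.0])

# Apply each activation discussed above to the same input.
print("sigmoid :", tf.nn.sigmoid(x).numpy())   # 1 / (1 + exp(-x)), squashes to (0, 1)
print("relu    :", tf.nn.relu(x).numpy())      # max(0, x)
print("relu6   :", tf.nn.relu6(x).numpy())     # min(max(0, x), 6), capped at 6
print("tanh    :", tf.nn.tanh(x).numpy())      # bounded in [-1, 1]
print("softplus:", tf.nn.softplus(x).numpy())  # log(exp(x) + 1), smooth approximation of ReLU

Running the sketch makes the differences easy to see: sigmoid and tanh squash large inputs toward their bounds, ReLU zeros out negatives, ReLU6 additionally caps the output at 6, and softplus tracks ReLU while staying smooth around zero.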