Using functions to initialize weights and biases

Weights and biases are the trainable parameters of any deep neural network, and here we define a pair of helper functions to automate their initialization. It is good practice to initialize weights with a small amount of noise to break symmetry between units and prevent zero gradients. In addition, giving biases a small positive initial value helps avoid inactive ("dead") neurons, which is particularly useful when the network uses ReLU activations. A minimal sketch of such helpers is shown below.
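The following is a minimal sketch of these two helpers, assuming TensorFlow 2.x; the names weight_variable and bias_variable, the stddev and bias values, and the layer sizes in the usage example are illustrative choices, not taken from the original text:

```python
import tensorflow as tf

def weight_variable(shape):
    # Small truncated-normal noise breaks symmetry between units
    # and keeps initial activations (and gradients) non-zero.
    initial = tf.random.truncated_normal(shape, stddev=0.1)
    return tf.Variable(initial)

def bias_variable(shape):
    # A slightly positive constant bias keeps ReLU units in their
    # active region at the start of training, avoiding dead neurons.
    initial = tf.constant(0.1, shape=shape)
    return tf.Variable(initial)

# Example usage: parameters of a fully connected layer
# mapping 784 inputs to 256 hidden units (sizes are illustrative).
W = weight_variable([784, 256])
b = bias_variable([256])
```

Wrapping the initialization logic in functions like these keeps the network-construction code concise and ensures every layer is initialized consistently.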
