Weight and bias initialization

Initializing the weights and biases of the hidden layers is an important design decision that needs to be taken care of:

  • Do not use all-zero initialization: A reasonable-sounding idea might be to set all the initial weights to zero, but this does not work in practice: if every weight starts at the same value, every neuron computes the same output and receives the same gradient update, so there is no source of asymmetry between neurons and they never learn different features.
  • Small random numbers: Instead, initialize the weights to small random numbers that are close to, but not exactly, zero, for example by drawing them from a zero-mean Gaussian with a small standard deviation. Alternatively, small numbers drawn from a uniform distribution work as well.
  • Initializing the biases: It is possible, and common, to initialize the biases to zero, since the asymmetry breaking is already provided by the small random weights. Some practitioners set all biases to a small constant value, such as 0.01, so that every ReLU unit fires at initialization and can propagate a gradient; however, this has not been shown to give a consistent improvement, so sticking to zero is recommended. A short code sketch of these recommendations follows this list.
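
As a rough illustration of the last two points, the following NumPy sketch initializes one fully connected layer with small Gaussian weights and zero biases. The function name init_layer and the weight_scale value of 0.01 are illustrative choices, not prescribed by the text:

```python
import numpy as np

def init_layer(n_in, n_out, weight_scale=0.01, rng=None):
    """Initialize one fully connected layer with small random weights
    and zero biases (illustrative sketch of the recommendations above)."""
    rng = np.random.default_rng() if rng is None else rng
    # Small zero-mean Gaussian weights break the symmetry between neurons.
    W = weight_scale * rng.standard_normal((n_in, n_out))
    # Zero biases are fine because the random weights already provide asymmetry.
    b = np.zeros(n_out)
    return W, b

# Example: a hidden layer mapping 784 inputs to 128 units.
W1, b1 = init_layer(784, 128)
```

A uniform distribution over a small symmetric interval (for example, drawn with rng.uniform(-0.01, 0.01, size=(n_in, n_out))) could be substituted for the Gaussian draw without changing the rest of the sketch.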