Weights and biases are integral to optimizing any deep neural network, so here we define a couple of helper functions to automate their initialization. It is good practice to initialize weights with a small amount of noise to break symmetry between units and prevent zero gradients. Additionally, since we use ReLU activations, a small positive initial bias helps avoid "dead" neurons that never activate.
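A minimal sketch of such helpers, written here in plain NumPy for illustration (the function names `weight_variable` and `bias_variable`, the 0.1 standard deviation, and the 0.1 bias constant are assumptions, not fixed by the text):

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def weight_variable(shape, stddev=0.1):
    # Small Gaussian noise: every unit starts slightly different,
    # which breaks symmetry and avoids identical (or zero) gradients.
    return rng.normal(loc=0.0, scale=stddev, size=shape)

def bias_variable(shape, value=0.1):
    # Small positive constant: pre-activations start slightly above
    # zero, so ReLU units are less likely to begin in the dead region.
    return np.full(shape, value)

# Example: one fully connected layer of 128 ReLU units on 784 inputs.
W = weight_variable((784, 128))
b = bias_variable((128,))
```

In a framework such as TensorFlow or PyTorch the same idea applies; only the variable-creation call changes (e.g. wrapping the initial value in a trainable tensor).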