Training neural networks

Training deep neural networks is hard because several hyperparameters must be tuned. Hyperparameters are the variables that define the structure of the network and how it is trained. The number of hidden layers and the choice of activation function are examples of architecture-defining hyperparameters; similarly, the learning rate and the batch size of the training data are examples of training-related hyperparameters. The other main parameters are the network's weights and biases, which are not chosen by hand but are obtained by fitting the network to the training data. The mechanism or method of obtaining these parameters is called training.
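The distinction between hyperparameters and learned parameters can be sketched in a few lines of Python. The example below is illustrative, not any particular library's API: it trains a single linear neuron with gradient descent, where `learning_rate` and `epochs` are hyperparameters we fix in advance, while the weight `w` and bias `b` are the parameters that training produces.

```python
# Hyperparameters: chosen before training, never learned from the data.
learning_rate = 0.1   # training-related hyperparameter
epochs = 200          # training-related hyperparameter

# Training data for the target function y = 2x + 1 (illustrative).
data = [(x, 2 * x + 1) for x in [0.0, 0.5, 1.0, 1.5, 2.0]]

# Parameters: initialized arbitrarily, then obtained by training.
w, b = 0.0, 0.0

for _ in range(epochs):
    for x, y in data:
        y_hat = w * x + b              # forward pass
        grad = 2 * (y_hat - y)         # d(squared error)/d(y_hat)
        w -= learning_rate * grad * x  # gradient-descent update
        b -= learning_rate * grad

print(w, b)  # after training, close to the true values 2 and 1
```

Changing a hyperparameter such as `learning_rate` changes how training behaves, but it is the update loop itself that determines the final values of `w` and `b`.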
