Early stopping in neural network training

An epoch is one round trip of training: forward propagation through the network followed by the backpropagation update of the weights and biases. These round trips stop either once training converges (the error terms reach a minimum) or after a preset number of iterations.
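
Both stopping criteria are exposed directly in R's nnet package. The call below is only a minimal sketch; the network size and the tolerance values shown are illustrative, not recommendations.

```r
library(nnet)

set.seed(42)
# Training stops either when the fit criterion converges
# (abstol/reltol) or after a preset number of iterations (maxit).
fit <- nnet(Species ~ ., data = iris, size = 5,
            maxit  = 200,   # preset maximum number of iterations
            abstol = 1e-4,  # stop once the fit criterion falls below this
            reltol = 1e-8)  # stop once the criterion can barely be reduced
```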

Early stopping is a technique used to deal with overfitting of the model (more on overfitting in the next few pages). The available data is separated into two parts: one is used for training, while the other is held out for validation. Earlier, we separated our IRIS dataset in exactly this way: 75 percent for training and 25 percent for validation.
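
That split can be reproduced with base R alone; a minimal sketch, assuming the built-in iris data frame (the seed and the train/valid names are arbitrary):

```r
set.seed(42)
idx   <- sample(nrow(iris), round(0.75 * nrow(iris)))
train <- iris[idx, ]   # 75 percent, used for training
valid <- iris[-idx, ]  # 25 percent, held out for validation
```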

With the training data, we compute the gradient and update the network weights and biases. The second set of data, the testing or validation data, is used to detect overfitting. If the error on the validation set increases for a specified number of iterations, training is stopped, and the weights and biases from the point of lowest validation error are used by the model. This method is called early stopping. (In R's nnet package, the abstol and reltol arguments control the related stopping tolerances on the training fit criterion.)
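
Because nnet's abstol and reltol monitor the training fit rather than the validation error, a patience-based early stopping loop has to be written by hand. The sketch below, reusing the train/valid split from above, warm-starts the network one optimizer iteration at a time through the Wts argument; patience, stall, and best_fit are illustrative names, not part of any API.

```r
library(nnet)

patience <- 10    # iterations to wait after the last improvement
best_err <- Inf   # lowest validation error seen so far
best_fit <- NULL  # model at the point of lowest validation error
stall    <- 0

# Short initial run just to obtain a weight vector to warm-start from
fit <- nnet(Species ~ ., data = train, size = 5, maxit = 1, trace = FALSE)

for (i in 1:200) {
  # One more optimizer iteration, warm-started from the current weights
  fit <- nnet(Species ~ ., data = train, size = 5, maxit = 1,
              Wts = fit$wts, trace = FALSE)
  # Misclassification rate on the held-out validation set
  val_err <- mean(predict(fit, valid, type = "class") != valid$Species)
  if (val_err < best_err) {
    best_err <- val_err  # validation error improved: remember this model
    best_fit <- fit
    stall    <- 0
  } else {
    stall <- stall + 1            # no improvement this iteration
    if (stall >= patience) break  # early stopping: fall back to best_fit
  }
}
```

When the loop breaks, best_fit holds the weights and biases from the point of lowest validation error, which is exactly what early stopping prescribes.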

The generalization error of an ensemble of early-stopped neural networks is comparable to that of an individual neural network of optimal architecture trained with a traditional algorithm. Without early stopping, the individual network needs complex and precise tuning to attain the same generalization.
