Early stopping

As the training of a large neural network proceeds, the training error decreases steadily over time, but, as shown in the following figure, the validation set error starts to increase after a certain number of iterations:

Figure: Early stopping, training versus validation error

If training is stopped at the point where the validation error starts to increase, we obtain a model with better generalization performance. This is called early stopping. It is controlled by a patience hyperparameter, which sets how many times an increase in the validation set error is tolerated before training is aborted. Early stopping can be used either alone or in conjunction with other regularization strategies.
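As a concrete illustration, the following is a minimal, framework-agnostic sketch of an early-stopping loop with patience. The names `train_one_epoch` and `evaluate` are hypothetical placeholders for whatever training and validation routines your framework provides, and snapshotting the model with `copy.deepcopy` is likewise an assumption made for illustration.

```python
import copy

def fit_with_early_stopping(model, train_one_epoch, evaluate,
                            max_epochs=100, patience=5):
    """Train until the validation error fails to improve for `patience`
    consecutive epochs, then return the best model seen so far."""
    best_val_error = float("inf")
    best_model = None
    epochs_since_improvement = 0

    for epoch in range(max_epochs):
        train_one_epoch(model)        # hypothetical: one pass over the training set
        val_error = evaluate(model)   # hypothetical: error on the validation set

        if val_error < best_val_error:
            best_val_error = val_error
            best_model = copy.deepcopy(model)  # snapshot the best weights so far
            epochs_since_improvement = 0
        else:
            epochs_since_improvement += 1
            if epochs_since_improvement >= patience:
                print(f"Stopping early at epoch {epoch}: "
                      f"no improvement for {patience} epochs")
                break

    return best_model if best_model is not None else model
```

With `patience=5`, training aborts after five consecutive epochs without a new best validation error, and the snapshot taken at the best epoch is returned rather than the final, possibly overfit, weights.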
