This chapter presents techniques for optimizing neural networks so that they deliver their best performance. Tasks such as input selection, dataset separation and filtering, choosing the number of hidden neurons, and cross-validation strategies are examples of what can be adjusted to improve a neural network's performance. The chapter also focuses on methods for adapting neural networks to real-time data, and two implementations of these techniques are presented. Application problems will be selected for the exercises. This chapter deals with the following topics:
When developing a neural network application, it is quite common to face problems with the accuracy of the results. These problems can have various sources:
Designing a neural network application often requires patience and trial-and-error methods. There is no methodology that states exactly which number of hidden units or which architecture should be used, only recommendations on how to choose these parameters sensibly. Another issue programmers may face is long training that still fails to fit the data: no matter how long the training runs, the neural network won't converge.
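The trial-and-error approach can be sketched as follows. This is a minimal illustration, not code from the chapter: it trains a tiny one-hidden-layer sigmoid network on the XOR problem for several candidate hidden-layer sizes and keeps the size with the lowest final mean squared error. The network structure, learning rate, and epoch budget are all assumptions chosen for the sketch.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_xor_mse(n_hidden, epochs=3000, lr=0.5, seed=42):
    """Train a 2/n_hidden/1 sigmoid network on XOR; return the final MSE."""
    rng = np.random.default_rng(seed)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)
    W1 = rng.normal(size=(2, n_hidden)); b1 = np.zeros(n_hidden)
    W2 = rng.normal(size=(n_hidden, 1)); b2 = np.zeros(1)
    for _ in range(epochs):
        h = sigmoid(X @ W1 + b1)            # hidden-layer activations
        out = sigmoid(h @ W2 + b2)          # network output
        err = out - y
        d_out = err * out * (1 - out)       # output-layer delta
        d_h = (d_out @ W2.T) * h * (1 - h)  # hidden-layer delta
        W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
        W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)
    return float(np.mean(err ** 2))

# Trial and error: try several hidden-layer sizes and keep the best one.
candidates = [1, 2, 4, 8]
scores = {n: train_xor_mse(n) for n in candidates}
best = min(scores, key=scores.get)
print("best hidden size:", best)
```

In practice the same loop would compare error on a held-out validation set rather than the training error, so that the chosen size generalizes rather than merely memorizes.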
On the other hand, the neural network solution designer may simply wish to improve the results. A neural network learns only until its learning algorithm reaches a stop condition, either a maximum number of epochs or a target mean squared error, so the results may end up insufficiently accurate or poorly generalized. Improving them requires a redesign of the neural structure, or a new dataset selection.
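The two stop conditions mentioned above can be sketched as a simple training loop. This is an illustrative example, not code from the chapter; the `update_step` callback and the toy error curve are assumptions standing in for a real learning step.

```python
# A training loop with two stop conditions: a maximum number of epochs
# and a target mean squared error, whichever is reached first.
def train(update_step, max_epochs=1000, target_mse=1e-3):
    """Call update_step() once per epoch until the MSE target or epoch budget is hit."""
    mse = float("inf")
    for epoch in range(1, max_epochs + 1):
        mse = update_step()
        if mse <= target_mse:
            return epoch, mse, "target_mse"   # converged early
    return max_epochs, mse, "max_epochs"      # epoch budget exhausted

# Toy update: the MSE halves each epoch, standing in for a real learning step.
state = {"mse": 1.0}
def step():
    state["mse"] *= 0.5
    return state["mse"]

epochs, mse, reason = train(step, max_epochs=50, target_mse=1e-3)
print(epochs, reason)  # stops at epoch 10, since 0.5**10 <= 1e-3
```

If the loop regularly exits with `"max_epochs"` while the error is still high, that is the non-convergence symptom described above, and a redesign of the network or the dataset is usually needed rather than more epochs.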