In this chapter, the reader will be presented with techniques for optimizing neural networks so that they perform at their best. Tasks such as input selection, dataset separation and filtering, and choosing the number of hidden neurons are examples of adjustments that can improve a neural network's performance. Furthermore, this chapter focuses on methods for adapting neural networks to real-time data, and two implementations of these techniques are presented. Application problems are selected for the exercises. This chapter deals with the following:
When developing a neural network application, it is quite common to face problems regarding how accurate the results are. These problems can have a variety of sources:
Designing a neural network application sometimes demands a lot of patience and trial and error. There is no methodology that specifies exactly how many hidden units or which architecture should be used, only recommendations on how to choose these parameters properly. Another issue that programmers may face is a long training time that still fails to make the neural network learn the data: no matter how long the training runs, the network won't converge.
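Since no methodology prescribes the number of hidden units, that choice is usually made empirically. The following is a minimal sketch of such a trial-and-error search, not the book's own method: every name here is illustrative, `train_and_score` stands in for a full training run (a tiny one-hidden-layer network trained by gradient descent on a toy regression task), and the candidate sizes are arbitrary. The idea is simply to train once per candidate size and keep the size with the lowest validation error.

```python
import numpy as np

def train_and_score(n_hidden, X_tr, y_tr, X_val, y_val,
                    epochs=200, lr=0.1, seed=0):
    """Train a one-hidden-layer tanh network and return validation MSE."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(0.0, 0.5, (X_tr.shape[1], n_hidden))
    W2 = rng.normal(0.0, 0.5, (n_hidden, 1))
    for _ in range(epochs):
        h = np.tanh(X_tr @ W1)              # hidden activations
        err = h @ W2 - y_tr                 # output error
        gW2 = h.T @ err / len(X_tr)         # gradient w.r.t. W2
        gh = (err @ W2.T) * (1.0 - h ** 2)  # backprop through tanh
        gW1 = X_tr.T @ gh / len(X_tr)       # gradient w.r.t. W1
        W2 -= lr * gW2
        W1 -= lr * gW1
    val_pred = np.tanh(X_val @ W1) @ W2
    return float(np.mean((val_pred - y_val) ** 2))

# Toy data: learn y = sin(x1 + x2), with a held-out validation split.
rng = np.random.default_rng(42)
X = rng.uniform(-1, 1, (200, 2))
y = np.sin(X.sum(axis=1, keepdims=True))
X_tr, X_val, y_tr, y_val = X[:150], X[150:], y[:150], y[150:]

# Trial and error: train once per candidate size, keep the best.
candidates = [1, 2, 4, 8, 16]
best = min((train_and_score(n, X_tr, y_tr, X_val, y_val), n)
           for n in candidates)
print(f"best hidden size: {best[1]} (validation MSE {best[0]:.4f})")
```

In practice each candidate would be trained several times with different random seeds, since a single run can be unlucky with its initial weights.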
On the other hand, one may wish to improve the results. A neural network can learn until the learning algorithm reaches its stop condition: either a maximum number of epochs or a target mean squared error. Even so, the results are sometimes inaccurate or fail to generalize, which requires a redesign of the neural structure as well as of the dataset.
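The two stop conditions mentioned above can be sketched as a single training loop that halts on whichever condition is met first. This is an illustrative sketch only, with a toy linear model standing in for a neural network; the function name, parameters, and data are all assumptions, not code from this book.

```python
import numpy as np

def train(X, y, max_epochs=1000, target_mse=1e-4, lr=0.1, seed=0):
    """Gradient descent that stops on a target MSE or an epoch limit."""
    rng = np.random.default_rng(seed)
    w = rng.normal(0.0, 0.1, X.shape[1])
    mse = float(np.mean((X @ w - y) ** 2))
    for epoch in range(1, max_epochs + 1):
        err = X @ w - y
        mse = float(np.mean(err ** 2))
        if mse <= target_mse:               # stop condition 1: error goal
            return w, epoch, mse
        w -= lr * (X.T @ err) / len(X)      # one gradient-descent update
    return w, max_epochs, mse               # stop condition 2: epoch limit

# Toy noiseless linear target, so the error goal is reachable.
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, (100, 3))
y = X @ np.array([0.5, -0.3, 0.8])
w, epochs_used, mse = train(X, y)
print(f"stopped after {epochs_used} epochs, MSE = {mse:.6f}")
```

Note that reaching either stop condition only means training ended, not that the network generalizes; that still has to be checked on data it was not trained on.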