Summary

In this chapter, we saw how to develop a neural network model for a classification problem. We started with a simple classification model and explored how changing the number of hidden layers and the number of units in each hidden layer affects the results. The aim of this exploration and fine-tuning was to illustrate how to investigate and improve a classification model's performance. We also saw how to dig deeper into that performance with the help of a confusion matrix.

We purposefully began the chapter with a relatively small neural network and finished with an example of a relatively deep one. Deeper networks with several hidden layers can lead to overfitting, where a classification model performs very well on the training data but poorly on the test data. To avoid such situations, we can add a dropout layer after each dense layer, as illustrated previously. We also demonstrated the use of class weights for situations where class imbalance could bias a classification model toward a specific class. Finally, we saw how to save a trained model for future use so that we don't need to rerun the training.
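The techniques recapped above can be sketched together in a few lines of Keras. This is a minimal illustration, not the chapter's exact model: the data is synthetic, and the layer sizes, dropout rate, and file name are arbitrary choices for the sketch.

```python
# Sketch: a small Keras classifier with dropout after each dense layer,
# trained with class weights on imbalanced data, then saved for reuse.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Synthetic, imbalanced data (illustrative only): ~10% positive class
rng = np.random.default_rng(0)
x = rng.normal(size=(200, 8)).astype("float32")
y = (rng.random(200) < 0.1).astype("int32")

model = keras.Sequential([
    keras.Input(shape=(8,)),
    layers.Dense(16, activation="relu"),
    layers.Dropout(0.2),           # dropout after each dense layer to curb overfitting
    layers.Dense(8, activation="relu"),
    layers.Dropout(0.2),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Class weights: give the rare class proportionally more influence on the loss
n0, n1 = np.bincount(y, minlength=2)
class_weight = {0: len(y) / (2 * max(n0, 1)), 1: len(y) / (2 * max(n1, 1))}
model.fit(x, y, epochs=1, batch_size=32, class_weight=class_weight, verbose=0)

# Save the trained model so it can be reloaded later without retraining
model.save("classifier.keras")
reloaded = keras.models.load_model("classifier.keras")
```

The saved file contains the architecture, the learned weights, and the compile settings, so `load_model` returns a model that is ready for prediction or further training.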

For the models in this chapter, certain parameters were kept constant across the various experiments. For example, when compiling a model, we always used adam as the optimizer. One reason for adam's popularity is that it requires little tuning and tends to give good results quickly; however, the reader is encouraged to try other optimizers, such as adagrad, adadelta, and rmsprop, and observe their impact on classification performance. Another setting we kept constant is the batch size of 32 used when training the network. The reader is also encouraged to experiment with larger (such as 64) and smaller (such as 16) batch sizes and observe what effect this has on classification performance.
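The suggested experiments can be run in a simple grid. This is a hedged sketch with a tiny stand-in model and synthetic data, not the chapter's actual network; the epoch count and data are deliberately small so the loop finishes quickly.

```python
# Sketch: compare optimizers and batch sizes on a small Keras classifier.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

rng = np.random.default_rng(1)
x = rng.normal(size=(200, 8)).astype("float32")
y = (x[:, 0] > 0).astype("int32")   # a trivially learnable target for the demo

def build_model():
    # A fresh, identically initialized-in-shape model for each run,
    # so only the optimizer and batch size vary between experiments.
    return keras.Sequential([
        keras.Input(shape=(8,)),
        layers.Dense(16, activation="relu"),
        layers.Dense(1, activation="sigmoid"),
    ])

results = {}
for opt in ["adam", "adagrad", "adadelta", "rmsprop"]:
    for batch_size in [16, 32, 64]:
        model = build_model()
        model.compile(optimizer=opt, loss="binary_crossentropy",
                      metrics=["accuracy"])
        hist = model.fit(x, y, epochs=2, batch_size=batch_size, verbose=0)
        results[(opt, batch_size)] = hist.history["accuracy"][-1]

for key, acc in sorted(results.items()):
    print(key, round(acc, 3))
```

In a real comparison, you would evaluate on held-out validation data rather than training accuracy, and run each configuration several times to average out the effect of random initialization.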

In the chapters that follow, we will gradually develop more complex and deeper neural network models. Having addressed a classification model, where the response variable is categorical, in the next chapter we will go over the steps for developing and improving prediction models for regression problems, where the target variable is numeric.
