Summary

In this chapter, you learned about two deep learning algorithms that don't require pre-training: deep neural networks with dropout, and convolutional neural networks (CNNs). The key to achieving high precision is making the network sparse, and dropout is one technique for doing so. Another is the rectifier, an activation function that solves the saturation problem that occurs with the sigmoid function and the hyperbolic tangent. The CNN is the most popular algorithm for image recognition and has two distinctive features: convolution and max-pooling. Both of these enable the model to acquire translation invariance. If you are interested in how dropout, the rectifier, and other activation functions contribute to the performance of neural networks, the following could be good references: Deep Sparse Rectifier Neural Networks (Glorot et al. 2011, http://www.jmlr.org/proceedings/papers/v15/glorot11a/glorot11a.pdf), ImageNet Classification with Deep Convolutional Neural Networks (Krizhevsky et al. 2012, https://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks.pdf), and Maxout Networks (Goodfellow et al. 2013, http://arxiv.org/pdf/1302.4389.pdf).
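To make these mechanisms concrete, the following is a minimal NumPy sketch of the four operations summarized above: the rectifier, dropout, convolution, and 2x2 max-pooling. This is an illustrative toy, not the chapter's from-scratch implementation; the function names (relu, dropout, conv2d_valid, max_pool_2x2) and the inverted-dropout scaling convention are assumptions made here for the sketch.

import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # Rectifier: zero for negative inputs, identity otherwise. Unlike the
    # sigmoid or tanh, its gradient does not saturate for positive inputs.
    return np.maximum(0.0, x)

def dropout(x, p=0.5, train=True):
    # Inverted dropout (an assumption of this sketch): during training, zero
    # each unit with probability p and scale the survivors by 1/(1-p), so no
    # rescaling is needed at test time.
    if not train:
        return x
    mask = rng.random(x.shape) >= p
    return x * mask / (1.0 - p)

def conv2d_valid(x, k):
    # 'Valid' 2D convolution of a single-channel image with one kernel.
    # As in most deep learning libraries, this is actually a
    # cross-correlation (the kernel is not flipped).
    kh, kw = k.shape
    h, w = x.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def max_pool_2x2(x):
    # Non-overlapping 2x2 max-pooling: keep only the strongest response in
    # each window, which makes the output robust to small translations.
    h, w = x.shape
    return x[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

# A toy forward pass: convolve, rectify, pool, then regularize with dropout.
img = rng.standard_normal((6, 6))      # tiny single-channel "image"
k = rng.standard_normal((3, 3))        # one convolution kernel
fmap = relu(conv2d_valid(img, k))      # 4x4 feature map
pooled = max_pool_2x2(fmap)            # 2x2 after pooling
out = dropout(pooled.ravel(), p=0.5)   # sparse activations during training
print(pooled.shape, out)

Note how shifting the input image by a pixel often leaves the pooled output unchanged; this is the translation invariance that convolution and max-pooling give the model.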

While you now know some of the most popular and useful deep learning algorithms, there are many others that this book has not covered. This field of study is becoming more and more active, and new algorithms appear all the time. But don't worry, as they all share the same root: the neural network. Now that you know the way of thinking required to grasp and implement these models, you can understand whatever new models you encounter.

We've implemented the deep learning algorithms from scratch so that you could fully understand how they work. In the next chapter, you'll see how to implement them with deep learning libraries to facilitate our research and applications.
