Summary

In this chapter, we illustrated the steps for developing a convolutional recurrent neural network that classifies authors based on articles they have written. Convolutional recurrent neural networks combine the strengths of two architectures in a single network: convolutional layers capture high-level local features from the data, while recurrent layers capture long-term dependencies in sequential data.

First, convolutional recurrent neural networks extract features using a one-dimensional convolutional layer. These extracted features are then passed to an LSTM recurrent layer, which captures long-term dependencies, and then to a fully connected dense layer. This dense layer outputs, for each author, the probability that the article was written by that author. Although we used a convolutional recurrent neural network for the author classification problem, this type of deep network can be applied to other sequence data, such as other natural language processing tasks, speech, and video-related problems.
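The pipeline described above can be sketched in Keras. This is a minimal illustration, not the chapter's exact model: the layer sizes, vocabulary size, sequence length, and number of authors below are hypothetical placeholders.

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, Conv1D, MaxPooling1D, LSTM, Dense

# Hypothetical hyperparameters -- the chapter's actual values may differ.
vocab_size = 5000   # size of the word index
seq_len = 300       # padded article length (in tokens)
n_authors = 50      # number of author classes

model = Sequential([
    Embedding(vocab_size, 32, input_length=seq_len),  # token embeddings
    Conv1D(32, 5, activation="relu"),                 # local n-gram features
    MaxPooling1D(4),                                  # downsample before the RNN
    LSTM(32),                                         # long-term dependencies
    Dense(n_authors, activation="softmax"),           # per-author probabilities
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="adam")
```

The softmax output gives one probability per author, and the predicted author is the class with the highest probability.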

The next chapter will be the last chapter of this book and will go over tips, tricks, and the road ahead. Developing deep learning networks for different types of data is both art and science. Every application brings new challenges, as well as an opportunity for us to learn and improve our skills. In the next chapter, we will summarize some such experiences that can turn out to be very useful in certain applications and help save a significant amount of time in arriving at models that perform well.
