Convolutional recurrent networks

Convolutional neural networks (CNNs) are useful for capturing high-level local features from image or text data, while LSTM networks can capture long-term dependencies in sequence data. A model architecture that combines convolutional and recurrent layers is called a convolutional recurrent neural network (CRNN). As an example, consider a dataset of articles and their authors, where we want to develop an author classification model: a network that takes the text of an article as input and predicts the probability that each candidate author wrote it. For this, we can first use a one-dimensional convolutional layer to extract important local features from the text. These extracted features are then passed to an LSTM recurrent layer to capture long-term dependencies, and its output is passed, in turn, to a fully connected dense layer that produces the authorship probabilities. CRNNs can also be applied to other problems involving natural language processing, speech, and video. In Chapter 12, Text Classification Using Convolutional Recurrent Networks, we illustrate the use of CRNNs for developing a model that classifies the author of an article.
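The Conv1D-to-LSTM-to-dense pipeline described above can be sketched as a forward pass in plain NumPy. This is an illustrative sketch with untrained random weights, not the chapter's actual model; all layer sizes and variable names here are assumptions chosen for readability.

```python
# Minimal NumPy sketch of a CRNN forward pass (illustrative, untrained weights):
# a 1D convolution extracts local features from a token-embedding sequence,
# a simple recurrent layer summarizes them, and a dense softmax layer
# produces per-author probabilities. All sizes below are assumed for the demo.
import numpy as np

rng = np.random.default_rng(0)
seq_len, emb_dim, conv_filters, kernel, hidden, num_authors = 20, 8, 6, 3, 5, 4

x = rng.normal(size=(seq_len, emb_dim))          # embedded article (tokens x dims)

# --- 1D convolution: slide a kernel over the sequence to get local features
W_c = rng.normal(size=(kernel, emb_dim, conv_filters))
conv = np.stack([
    np.maximum(0, np.einsum("ke,kef->f", x[t:t + kernel], W_c))  # ReLU
    for t in range(seq_len - kernel + 1)
])                                               # shape: (18, conv_filters)

# --- recurrent layer: carry a hidden state across the feature sequence
W_h = rng.normal(size=(conv_filters, hidden))
U_h = rng.normal(size=(hidden, hidden))
h = np.zeros(hidden)
for t in range(conv.shape[0]):
    h = np.tanh(conv[t] @ W_h + h @ U_h)         # final h summarizes the sequence

# --- dense softmax layer: map the summary to author probabilities
logits = h @ rng.normal(size=(hidden, num_authors))
probs = np.exp(logits - logits.max())
probs /= probs.sum()
print(probs.shape, round(probs.sum(), 6))        # (4,) 1.0
```

In practice one would use a deep learning framework (the chapter's examples use Keras layers such as an embedding layer, a 1D convolution, an LSTM, and a dense output layer) rather than hand-written loops, but the data flow is the same: local features first, sequence summary second, class probabilities last.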
