Recurrent-Type Neural Networks - Language Modeling

Recurrent neural networks (RNNs) are a class of deep learning architectures widely used for natural language processing. These architectures let us use contextual information from earlier in a sequence when making the current prediction, and specific variants, such as LSTM networks, are designed to handle long-term dependencies in the input sequence. In this chapter, we'll show how to build a sequence-to-sequence model, which is useful in many NLP applications. We will demonstrate these concepts by building a character-level language model and seeing how it generates sentences similar to the original input sequences.

The following topics will be covered in this chapter:

  • The intuition behind RNNs
  • LSTM networks
  • Implementation of the language model
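To make the recurrence idea concrete before diving in, here is a minimal NumPy sketch of a single character-level RNN: each step combines the current character with the previous hidden state to produce a distribution over the next character. This is an illustrative toy, not the chapter's actual model (which uses LSTM cells); the weight names (`Wxh`, `Whh`, `Why`) and sizes are assumptions for the sketch.

```python
import numpy as np

# Toy character-level RNN forward pass (illustrative sketch only;
# a real language model would use LSTM cells and trained weights).
text = "hello world"
chars = sorted(set(text))
char_to_ix = {c: i for i, c in enumerate(chars)}

vocab_size = len(chars)   # input/output dimension (one slot per character)
hidden_size = 16          # hypothetical hidden-state size

rng = np.random.default_rng(0)
Wxh = rng.normal(scale=0.01, size=(hidden_size, vocab_size))   # input -> hidden
Whh = rng.normal(scale=0.01, size=(hidden_size, hidden_size))  # hidden -> hidden
Why = rng.normal(scale=0.01, size=(vocab_size, hidden_size))   # hidden -> output

def step(x_ix, h):
    """One recurrence step: fold the current character into the hidden state."""
    x = np.zeros((vocab_size, 1))
    x[x_ix] = 1.0                            # one-hot encode the character
    h = np.tanh(Wxh @ x + Whh @ h)           # new hidden state carries context
    logits = Why @ h
    p = np.exp(logits) / np.exp(logits).sum()  # softmax: next-char distribution
    return h, p

h = np.zeros((hidden_size, 1))               # initial hidden state
for c in text:
    h, p = step(char_to_ix[c], h)

# p is now a probability distribution over the next character,
# conditioned (through h) on everything seen so far.
```

Because the hidden state `h` is threaded through every step, the prediction at each position depends on the whole prefix of the sequence, which is exactly the contextual information plain feed-forward networks lack.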