Generating Song Lyrics Using RNN

In a normal feedforward neural network, each input is independent of the others. But with a sequential dataset, we need to know about past inputs to make a prediction. A sequence is an ordered set of items. For instance, a sentence is a sequence of words. Suppose we want to predict the next word in a sentence; to do so, we need to remember the previous words. A normal feedforward neural network cannot predict the correct next word, as it does not remember the previous words of the sentence. In such circumstances, where we need to remember previous inputs to make predictions, we use recurrent neural networks (RNNs).
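To make this concrete, here is a minimal sketch (not the book's code, just an illustration using numpy) of what "remembering the previous input" means: an RNN keeps a hidden state that is updated at every time step from both the current input and the previous state, so information from earlier items in the sequence can influence later predictions. The weight names `U` (input-to-hidden) and `W` (hidden-to-hidden) follow common convention and are assumptions here:

```python
import numpy as np

rng = np.random.default_rng(0)

input_size, hidden_size = 4, 3
U = rng.standard_normal((hidden_size, input_size)) * 0.1  # input-to-hidden weights
W = rng.standard_normal((hidden_size, hidden_size)) * 0.1  # hidden-to-hidden weights
b = np.zeros(hidden_size)                                  # hidden bias

def rnn_step(x, h_prev):
    """Compute the new hidden state from the current input and the previous state."""
    return np.tanh(U @ x + W @ h_prev + b)

# Process a sequence of 5 inputs; the same weights are reused at every step,
# and h carries information about everything seen so far.
h = np.zeros(hidden_size)
for x in rng.standard_normal((5, input_size)):
    h = rnn_step(x, h)

print(h.shape)
```

A feedforward layer, by contrast, would map each `x` to an output independently, with no `h_prev` term, which is exactly why it cannot use earlier words when predicting the next one.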

In this chapter, we will describe how an RNN is used to model sequential datasets and how it remembers the previous input. We will begin by investigating how an RNN differs from a feedforward neural network. Then, we will inspect how forward propagation works in an RNN.

Moving on, we will examine the backpropagation through time (BPTT) algorithm, which is used for training RNNs. Later, we will look at the vanishing and exploding gradient problem, which occurs while training recurrent networks. You will also learn how to generate song lyrics using an RNN in TensorFlow.

At the end of the chapter, we will examine the different types of RNN architectures and how they are used for various applications.

In this chapter, we will learn about the following topics:

  • Recurrent neural networks
  • Forward propagation in RNNs
  • Backpropagation through time
  • The vanishing and exploding gradient problem
  • Generating song lyrics using RNNs
  • Different types of RNN architectures