How it works...

Any change to the input or output structure requires retraining the model. However, this assumption does not hold for many sequential datasets, such as text classification tasks where the input and output lengths vary. The RNN architecture helps address the issue of variable-length input.

The standard architecture for RNN with input and output is shown in the following figure:

Figure: Recurrent Neural Network architecture

The RNN architecture can be formulated as follows:

h_t = f(W·h_{t-1} + S·x_t)

where h_t is the state at time/index t and x_t is the input at time/index t. The matrix W represents the weights connecting hidden nodes across time steps, and S connects the input with the hidden layer. The output node at time/index t is related to the state h_t as follows:

o_t = g(V·h_t)

Here, f and g are activation functions, and V holds the weights connecting the hidden layer to the output node. In the preceding equations, the weights W, S, and V remain constant across states and time steps.
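The recurrence above can be sketched in a few lines of NumPy. This is a minimal illustration, not a trainable implementation: the layer sizes, the choice of tanh as the activation, and the random weight initialization are all assumptions made for the example. Note that the same fixed weight matrices process sequences of any length, which is how the RNN accommodates variable-length input.

```python
import numpy as np

# Assumed dimensions for illustration only.
n_input, n_hidden, n_output = 4, 8, 3
rng = np.random.default_rng(0)

W = rng.normal(scale=0.1, size=(n_hidden, n_hidden))  # hidden-to-hidden weights
S = rng.normal(scale=0.1, size=(n_hidden, n_input))   # input-to-hidden weights
V = rng.normal(scale=0.1, size=(n_output, n_hidden))  # hidden-to-output weights

def rnn_forward(xs):
    """Apply h_t = tanh(W h_{t-1} + S x_t) and o_t = V h_t for each step."""
    h = np.zeros(n_hidden)            # initial state h_0
    outputs = []
    for x in xs:                      # one iteration per time index t
        h = np.tanh(W @ h + S @ x)    # state update
        outputs.append(V @ h)         # output at time t
    return np.array(outputs), h

# The same weights handle sequences of different lengths:
short_seq = rng.normal(size=(3, n_input))   # 3 time steps
long_seq = rng.normal(size=(10, n_input))   # 10 time steps
out_short, _ = rnn_forward(short_seq)
out_long, _ = rnn_forward(long_seq)
print(out_short.shape)  # (3, 3)
print(out_long.shape)   # (10, 3)
```

Because the weights are shared across time steps, the number of parameters is independent of the sequence length.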
