How it works...

In step 1, we added the label predictSequence for the output layer. Note that we mentioned the input layer reference when defining the output layer. In step 2, we specified it as L1, which is the LSTM layer created in the previous recipe. We need to specify this reference to avoid errors during execution caused by a disconnection between the LSTM layer and the output layer. Also, the output layer definition must use the same layer name that we specified in the setOutputs() method.
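The following is a minimal sketch of how these pieces fit together in a ComputationGraph configuration. Only the L1 and predictSequence names come from the recipe; the input name (trainFeatures), layer sizes, loss function, and activations are hypothetical placeholders:

import org.deeplearning4j.nn.conf.ComputationGraphConfiguration;
import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
import org.deeplearning4j.nn.conf.layers.LSTM;
import org.deeplearning4j.nn.conf.layers.RnnOutputLayer;
import org.deeplearning4j.nn.graph.ComputationGraph;
import org.nd4j.linalg.activations.Activation;
import org.nd4j.linalg.lossfunctions.LossFunctions;

public class OutputLayerSketch {
    public static void main(String[] args) {
        int numFeatures = 10;   // hypothetical input feature count
        int lstmLayerSize = 50; // hypothetical LSTM hidden size
        int numClasses = 4;     // hypothetical number of output classes

        ComputationGraphConfiguration conf = new NeuralNetConfiguration.Builder()
                .graphBuilder()
                .addInputs("trainFeatures")
                // The output name here must match the layer name passed to addLayer() below.
                .setOutputs("predictSequence")
                // LSTM layer "L1" from the previous recipe (simplified here).
                .addLayer("L1", new LSTM.Builder()
                        .nIn(numFeatures)
                        .nOut(lstmLayerSize)
                        .activation(Activation.TANH)
                        .build(), "trainFeatures")
                // Output layer "predictSequence", wired to "L1" so the graph stays connected.
                .addLayer("predictSequence", new RnnOutputLayer.Builder(LossFunctions.LossFunction.MCXENT)
                        .activation(Activation.SOFTMAX)
                        .nIn(lstmLayerSize)
                        .nOut(numClasses)
                        .build(), "L1")
                .build();

        ComputationGraph model = new ComputationGraph(conf);
        model.init();
        System.out.println(model.summary());
    }
}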

In step 2, we used RnnOutputLayer to construct the output layer. This DL4J output layer implementation is intended for use cases that involve recurrent neural networks. It is functionally the same as OutputLayer in a multi-layer perceptron, except that the reshaping of the output and labels is handled automatically.
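To see why the automatic reshaping matters, here is a small sketch (with hypothetical dimensions) contrasting the rank-3 time-series shape that RnnOutputLayer consumes with the rank-2 shape a standard OutputLayer expects:

import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.factory.Nd4j;

public class RnnShapeSketch {
    public static void main(String[] args) {
        // Hypothetical dimensions, for illustration only.
        int miniBatchSize = 32;
        int numClasses = 4;
        int timeSeriesLength = 60;

        // RNN activations and labels are rank-3 arrays:
        // [miniBatchSize, numClasses, timeSeriesLength].
        INDArray rnnLabels = Nd4j.zeros(miniBatchSize, numClasses, timeSeriesLength);

        // A standard OutputLayer works with rank-2 arrays: [miniBatchSize, numClasses].
        INDArray denseLabels = Nd4j.zeros(miniBatchSize, numClasses);

        // RnnOutputLayer accepts the rank-3 time-series shape directly, so no manual
        // reshaping is needed between the LSTM activations and the output layer.
        System.out.println("RNN label shape:   " + java.util.Arrays.toString(rnnLabels.shape()));
        System.out.println("Dense label shape: " + java.util.Arrays.toString(denseLabels.shape()));
    }
}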
