How it works...

In step 2, we pass both the dataset iterator and the epoch count to start the training session. Because we use a very large time series dataset, a large epoch count results in a long training time. Moreover, more epochs do not always guarantee good results; the network may end up overfitting. So, we need to run the training experiment multiple times to arrive at an optimal value for the epoch count and the other important hyperparameters. The optimal value is the point at which the network achieves its best performance on held-out data.
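In DL4J, step 2 usually boils down to a call such as `model.fit(trainIterator, numEpochs)` (the variable names here are assumed, not taken from the recipe). The epoch-sweep experiment described above can be sketched as follows, using synthetic validation losses purely for illustration:

```java
public class EpochSweep {

    // Pick the epoch count with the lowest validation loss.
    // valLossPerEpoch[e] is the loss measured after epoch e + 1.
    static int bestEpoch(double[] valLossPerEpoch) {
        int best = 0;
        for (int e = 1; e < valLossPerEpoch.length; e++) {
            if (valLossPerEpoch[e] < valLossPerEpoch[best]) {
                best = e;
            }
        }
        return best + 1; // report epochs as 1-indexed
    }

    public static void main(String[] args) {
        // Synthetic losses: the network improves, then degrades as it overfits.
        double[] valLoss = {0.90, 0.62, 0.48, 0.41, 0.39, 0.43, 0.50, 0.58};
        System.out.println("Optimal epoch count: " + bestEpoch(valLoss));
        // prints: Optimal epoch count: 5
    }
}
```

In practice, you would obtain the per-epoch losses from an evaluation on a validation iterator after each epoch (or use DL4J's early-stopping support) rather than from a hard-coded array.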

Effectively, the training process relies on the memory-gated cells in the network's layers. As we discussed earlier, in the Constructing input layers for the network recipe, LSTMs are good at capturing long-term dependencies in datasets.
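For reference, the gating that gives an LSTM cell this ability is the standard formulation (not specific to this recipe): the forget gate $f_t$, input gate $i_t$, and output gate $o_t$ control what the cell state $c_t$ retains, absorbs, and exposes at each time step $t$:

```latex
\begin{aligned}
f_t &= \sigma(W_f x_t + U_f h_{t-1} + b_f) \\
i_t &= \sigma(W_i x_t + U_i h_{t-1} + b_i) \\
o_t &= \sigma(W_o x_t + U_o h_{t-1} + b_o) \\
\tilde{c}_t &= \tanh(W_c x_t + U_c h_{t-1} + b_c) \\
c_t &= f_t \odot c_{t-1} + i_t \odot \tilde{c}_t \\
h_t &= o_t \odot \tanh(c_t)
\end{aligned}
```

Because $c_t$ is updated additively (gated by $f_t$), gradients can flow across many time steps without vanishing, which is why LSTMs hold long-term dependencies well.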
