Training the LSTM model

Start the TensorFlow session and initialize all the variables:

session = tf.Session()
session.run(tf.global_variables_initializer())
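
Note that tf.Session belongs to the TensorFlow 1.x graph API. If you are running TensorFlow 2.x instead, the same session-based workflow can still be followed through the compatibility module; the following is only a minimal sketch of the required imports (an assumption about your environment, not part of the original code):

# only needed when running this 1.x-style code on TensorFlow 2.x
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()  # the session/graph workflow requires graph mode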

Set the number of epochs:

epochs = 100

Then, for each epoch, perform the following:

for i in range(epochs):
    train_predictions = []
    index = 0
    epoch_loss = []

Then sample a batch of data and train the network:

    while (index + batch_size) <= len(X_train):

        X_batch = X_train[index:index+batch_size]
        y_batch = y_train[index:index+batch_size]

        #predict the price and compute the loss
        predicted, loss_val, _ = session.run([y_hat, loss, optimizer], feed_dict={input: X_batch, target: y_batch})

        #store the loss in the epoch_loss list
        epoch_loss.append(loss_val)

        #store the predictions in the train_predictions list
        train_predictions.append(predicted)

        index += batch_size

Print the loss every 10 epochs:

    if (i % 10) == 0:
        print('Epoch {}, Loss: {}'.format(i, np.mean(epoch_loss)))

As you can see in the following output, the loss decreases over the epochs:

Epoch 0, Loss: 0.0402321927249 
Epoch 10, Loss: 0.0244581680745 
Epoch 20, Loss: 0.0177710317075 
Epoch 30, Loss: 0.0117778982967 
Epoch 40, Loss: 0.00901956297457 
Epoch 50, Loss: 0.0112476013601 
Epoch 60, Loss: 0.00944950990379 
Epoch 70, Loss: 0.00822851061821 
Epoch 80, Loss: 0.00766260037199 
Epoch 90, Loss: 0.00710930628702 
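
Because the loop resets train_predictions at the start of every epoch, it ends up holding the batch predictions from the final epoch. A quick sanity check is to line these up against the training targets and plot both series. The following is only an illustrative sketch; it assumes matplotlib is installed and that y_train and train_predictions are the arrays built above:

import numpy as np
import matplotlib.pyplot as plt

# flatten the per-batch predictions into one sequence aligned with y_train
predicted_prices = np.concatenate(train_predictions).ravel()
actual_prices = np.asarray(y_train[:len(predicted_prices)]).ravel()

plt.figure(figsize=(12, 5))
plt.plot(actual_prices, label='Actual price')
plt.plot(predicted_prices, label='Predicted price')
plt.title('LSTM predictions on the training set')
plt.legend()
plt.show()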