Running optimization

The next step is to run the optimization. Executing this process in TensorFlow consists of the following steps:

  1. The first step is to initialize the variables defined in the graph, which is done by calling TensorFlow's global_variables_initializer function:
# Initialize the variables defined in the graph
init <- tf$global_variables_initializer()
sess$run(init)
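
The loop in the next step references the graph objects optimizer, cost, and costt, which are defined earlier in the graph-construction step. As a minimal sketch of how such objects might look for a linear model, assuming hypothetical placeholder and variable names (X, Y for train data, Xt, Yt for test data, W, b for parameters) and an illustrative learning rate:

# Hypothetical graph objects assumed by the optimization loop below;
# the names and learning rate are illustrative, not from the original graph
cost <- tf$reduce_mean(tf$square(Y - tf$add(tf$matmul(X, W), b)))    # train MSE
costt <- tf$reduce_mean(tf$square(Yt - tf$add(tf$matmul(Xt, W), b))) # test MSE
optimizer <- tf$train$GradientDescentOptimizer(0.01)$minimize(cost)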

  2. The second step is to run the optimization loop while monitoring the cost on both the train and test datasets:

costconvergence <- NULL
for (step in 1:1000) {
  sess$run(optimizer)
  if (step %% 20 == 0) {
    # Record the train (cost) and test (costt) MSE every 20 iterations
    costconvergence <- rbind(costconvergence, c(step, sess$run(cost), sess$run(costt)))
    cat(step, "-", "Training Cost ==>", sess$run(cost), "\n")
  }
}
  3. The cost function on the train and test datasets can then be plotted to assess the convergence of the model, as shown in the following figure:
# Convert the recorded costs into a data frame for plotting
costconvergence <- data.frame(costconvergence)
colnames(costconvergence) <- c("iter", "train", "test")

# Plot train and test MSE against the iteration count
plot(costconvergence[, "iter"], costconvergence[, "train"], type = "l", col = "blue", xlab = "Iteration", ylab = "MSE")
lines(costconvergence[, "iter"], costconvergence[, "test"], col = "red")
legend(500, 0.25, c("Train", "Test"), lty = c(1, 1), lwd = c(2.5, 2.5), col = c("blue", "red"))

This graph shows that the model achieves most of its convergence by around 400 iterations; however, it continues to converge, albeit at a very slow rate, even after 1,000 iterations. The cost curves also show that the model is stable on both the train and holdout test datasets.
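
To make the observation about slow late-stage convergence concrete, the per-iteration improvement can be computed from the recorded costconvergence data frame; the 800-iteration cutoff below is an arbitrary choice for illustration:

# Average change in train MSE per iteration over the last recorded steps
late <- subset(costconvergence, iter > 800)
cat("Mean train-MSE change per iteration after step 800:",
    mean(diff(late$train) / diff(late$iter)), "\n")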
