How to do it...

  1. Import the data using R's standard read.csv function, convert it to matrix format, and then subset the features used for modeling, as defined in xFeatures and yFeatures (a quick sanity check is sketched after the code):
# Loading input and test data
xFeatures = c("Temperature", "Humidity", "Light", "CO2", "HumidityRatio")
yFeatures = "Occupancy"
occupancy_train <- as.matrix(read.csv("datatraining.txt", stringsAsFactors = TRUE))
occupancy_test <- as.matrix(read.csv("datatest.txt", stringsAsFactors = TRUE))

# subset features for modeling and transform to numeric values
occupancy_train<-apply(occupancy_train[, c(xFeatures, yFeatures)], 2, FUN=as.numeric)
occupancy_test<-apply(occupancy_test[, c(xFeatures, yFeatures)], 2, FUN=as.numeric)

# Data dimensions
nFeatures<-length(xFeatures)
nRow<-nrow(occupancy_train)
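As a quick sanity check (the file names above assume the UCI Occupancy Detection dataset files sit in the working directory), you can confirm the dimensions and class balance of the prepared matrices:
# Optional check of data dimensions and class balance
dim(occupancy_train)  # number of rows x 6 columns (5 features + Occupancy)
dim(occupancy_test)
table(occupancy_train[, yFeatures])  # counts of 0 (not occupied) and 1 (occupied)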
  2. Now load both the network and model parameters. The network parameters define the structure of the neural network, and the model parameters define its tuning criteria. As stated earlier, the neural network is built using two hidden layers, each with five neurons. The n_input parameter defines the number of independent variables, and n_classes defines the number of output units; here it is 1L because the binary outcome is captured by a single output. If the output variable were instead one-hot encoded (one attribute for occupied and another for not occupied), n_classes would be 2L, that is, equal to the number of one-hot encoded attributes (an illustrative sketch follows the code below). Among the model parameters, the learning rate is 0.001 and the number of epochs (or iterations) used to build the model is 10,000:
# Network Parameters
n_hidden_1 = 5L # 1st layer number of features
n_hidden_2 = 5L # 2nd layer number of features
n_input = 5L # 5 attributes
n_classes = 1L # Binary class

# Model Parameters
learning_rate = 0.001
training_epochs = 10000
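For reference only, if you went with the one-hot encoding mentioned above (it is not used in the rest of this recipe), the target could be expanded into two columns along the following lines; the y_onehot name is purely illustrative:
# Illustrative only: one-hot encode the binary target (not used in this recipe)
y_onehot <- cbind(1 - occupancy_train[, yFeatures], occupancy_train[, yFeatures])
colnames(y_onehot) <- c("NotOccupied", "Occupied")
# With this encoding, n_classes = 2L and a softmax cross-entropy cost would be used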
  3. The next step in TensorFlow is to set up a graph to run the optimization. Before doing so, let's reset the default graph using the following command:
# Reset the graph
tf$reset_default_graph()
  4. Additionally, let's start an interactive session, as it allows us to run variables without explicitly referring to the session object each time:
# Starting session as interactive session
sess<-tf$InteractiveSession()
  5. The following script defines the graph input (x for the independent variables and y for the dependent variable). The input feature x is defined as a constant, as it is fed to the graph in full. Similarly, the output feature y is also defined as a constant of type float32 (a placeholder-based alternative is noted after this code):
# Graph input
x = tf$constant(unlist(occupancy_train[,xFeatures]), shape=c(nRow, n_input), dtype=np$float32)
y = tf$constant(unlist(occupancy_train[,yFeatures]), dtype="float32", shape=c(nRow, 1L))
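Because x and y are constants, every optimization step later in the recipe uses the full training set at once. As an aside (not used in this recipe), the inputs could instead be declared as placeholders to allow mini-batch feeding; the x_batch and y_batch names are illustrative only:
# Illustrative alternative: placeholders that accept mini-batches via a feed dictionary
x_batch <- tf$placeholder(tf$float32, shape(NULL, n_input))
y_batch <- tf$placeholder(tf$float32, shape(NULL, 1L))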
  6. Now, let's create a multilayer perceptron with two hidden layers. Both hidden layers use the ReLU activation function, and the output layer uses a linear activation. The weights and biases are defined as variables that will be tuned during optimization, with initial values drawn randomly from a normal distribution. The following script initializes and stores the layers' weights and biases and defines the multilayer perceptron model:
# Initialize and store the hidden and output layers' weights and biases
weights = list(
  "h1" = tf$Variable(tf$random_normal(c(n_input, n_hidden_1))),
  "h2" = tf$Variable(tf$random_normal(c(n_hidden_1, n_hidden_2))),
  "out" = tf$Variable(tf$random_normal(c(n_hidden_2, n_classes)))
)
biases = list(
  "b1" = tf$Variable(tf$random_normal(c(1L, n_hidden_1))),
  "b2" = tf$Variable(tf$random_normal(c(1L, n_hidden_2))),
  "out" = tf$Variable(tf$random_normal(c(1L, n_classes)))
)

# Create the multilayer perceptron model
multilayer_perceptron <- function(x, weights, biases){
  # Hidden layer 1 with ReLU activation
  layer_1 = tf$add(tf$matmul(x, weights[["h1"]]), biases[["b1"]])
  layer_1 = tf$nn$relu(layer_1)
  # Hidden layer 2 with ReLU activation
  layer_2 = tf$add(tf$matmul(layer_1, weights[["h2"]]), biases[["b2"]])
  layer_2 = tf$nn$relu(layer_2)
  # Output layer with linear activation
  out_layer = tf$matmul(layer_2, weights[["out"]]) + biases[["out"]]
  return(out_layer)
}
  7. Now, construct the model using the initialized weights and biases:
pred = multilayer_perceptron(x, weights, biases)
  8. The next step is to define the cost and optimizer functions of the neural network:
# Define cost and optimizer
cost = tf$reduce_mean(tf$nn$sigmoid_cross_entropy_with_logits(logits=pred, labels=y))
optimizer = tf$train$AdamOptimizer(learning_rate=learning_rate)$minimize(cost)
  9. The neural network uses sigmoid cross-entropy as its cost function, which is then minimized with the Adam optimizer (a gradient-descent-based method) using a learning rate of 0.001. Before running the optimization, initialize the global variables as follows:
# Initializing the global variables
init = tf$global_variables_initializer()
sess$run(init)
  10. Once the global variables are initialized along with the cost and optimizer functions, let's begin training the model on the train dataset; a sketch for scoring the test set follows the loop:
# Training cycle
for(epoch in 1:training_epochs){
  sess$run(optimizer)
  if (epoch %% 20 == 0)
    cat(epoch, "-", sess$run(cost), "\n")
}
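After the training loop completes, a minimal sketch for scoring the held-out test set is shown below; it reuses the trained weights and biases through the multilayer_perceptron function and assumes the interactive session and the np module from the earlier steps are still available:
# Sketch: score the test set with the trained weights and report accuracy
nRowTest <- nrow(occupancy_test)
x_test <- tf$constant(unlist(occupancy_test[, xFeatures]), shape=c(nRowTest, n_input), dtype=np$float32)
pred_test <- tf$nn$sigmoid(multilayer_perceptron(x_test, weights, biases))
yhat_test <- ifelse(sess$run(pred_test) >= 0.5, 1, 0)
mean(yhat_test == occupancy_test[, yFeatures])  # proportion of correct predictions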