How to set up an autoencoder model

The next step is to set up the autoencoder model. Let's build a vanilla autoencoder using TensorFlow:

  1. Reset the graph and start an InteractiveSession:
# Reset the graph and set up an interactive session
tf$reset_default_graph()
sess <- tf$InteractiveSession()
  2. Define the network parameters, where n and m are the number of samples and features, respectively. To build the network, m is used to set up the input parameter:
# Network parameters
n_hidden_1 <- 5                  # Number of units in the hidden layer
n_input <- length(xFeatures)     # Number of input features (m)
nRow <- nrow(occupancy_train)    # Number of training samples (n)

When n_hidden_1 is smaller than the number of input features, the autoencoder compresses the data and is referred to as an under-complete autoencoder; when n_hidden_1 is larger than the number of input features, the autoencoder learns a sparse representation and is referred to as an over-complete autoencoder.
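For illustration (the variable names and values here are hypothetical; only the comparison against n_input matters), the two regimes correspond to different choices of hidden-layer size:

# Under-complete: hidden layer smaller than the input forces compression
n_hidden_under <- 2              # 2 < n_input -> under-complete
# Over-complete: hidden layer larger than the input yields a sparse code
n_hidden_over <- 2 * n_input     # > n_input -> over-complete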

  3. Define graph input parameters that include the input tensor and layer definitions for the encoder and decoder:
# Define input feature
x <- tf$constant(unlist(occupancy_train[, xFeatures]), shape=c(nRow, n_input), dtype=np$float32)

# Define hidden and bias layers for the encoder and decoder
hiddenLayerEncoder <- tf$Variable(tf$random_normal(shape(n_input, n_hidden_1)), dtype=np$float32)
biasEncoder <- tf$Variable(tf$zeros(shape(n_hidden_1)), dtype=np$float32)
hiddenLayerDecoder <- tf$Variable(tf$random_normal(shape(n_hidden_1, n_input)))
biasDecoder <- tf$Variable(tf$zeros(shape(n_input)))

The preceding script designs a single-layer encoder and decoder.
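As a quick, optional sanity check (a sketch using the variables defined above; not part of the original recipe), the tensor shapes can be inspected directly:

# Inspect tensor shapes (optional sanity check)
x$get_shape()                   # (nRow, n_input)
hiddenLayerEncoder$get_shape()  # (n_input, n_hidden_1)
biasEncoder$get_shape()         # (n_hidden_1)
hiddenLayerDecoder$get_shape()  # (n_hidden_1, n_input)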

  4. Define a function to evaluate the response:
auto_encoder <- function(x, hiddenLayerEncoder, biasEncoder){
  # Affine transform followed by sigmoid activation
  x_transform <- tf$nn$sigmoid(tf$add(tf$matmul(x, hiddenLayerEncoder), biasEncoder))
  x_transform
}

The auto_encoder function takes the node weights and biases and computes the sigmoid-activated output. The same function can be used for both the encoder and the decoder by passing in the respective weights.

  5. Create the encoder and decoder objects by passing the symbolic TensorFlow variables:
encoder_obj <- auto_encoder(x, hiddenLayerEncoder, biasEncoder)
y_pred <- auto_encoder(encoder_obj, hiddenLayerDecoder, biasDecoder)
Here, y_pred is the outcome from the decoder, which takes the encoder object as input together with the decoder's node and bias weights.

  6. Define the loss function and optimizer module:
learning_rate <- 0.01
cost <- tf$reduce_mean(tf$pow(x - y_pred, 2))  # Mean squared reconstruction error
optimizer <- tf$train$RMSPropOptimizer(learning_rate)$minimize(cost)
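In equation form, this cost is the mean squared reconstruction error over the n samples and m features (writing $\hat{x}$ for the reconstruction y_pred):

$$\text{cost} = \frac{1}{n \times m} \sum_{i=1}^{n} \sum_{j=1}^{m} \left(x_{ij} - \hat{x}_{ij}\right)^2$$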

The preceding script defines the mean squared error as the cost function and uses TensorFlow's RMSPropOptimizer with a learning rate of 0.01 to optimize the weights. The TensorFlow graph for the preceding model is shown in the following diagram:

[Figure: TensorFlow graph of the autoencoder model]
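With the graph constructed, a minimal training sketch (not part of the original recipe; the step count and logging interval here are arbitrary) would initialize the variables and repeatedly run the optimizer in the interactive session:

# Illustrative training sketch
sess$run(tf$global_variables_initializer())
for (step in 1:1000) {
  sess$run(optimizer)            # One RMSProp update on the full batch
  if (step %% 100 == 0) {
    cat("Step:", step, "cost:", sess$run(cost), "\n")
  }
}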
