Setting up a denoising autoencoder

The next step is to set up the autoencoder model:

  1. First, reset the graph and start an interactive session as follows:
# Reset the graph and set up an interactive session
tf$reset_default_graph()
sess <- tf$InteractiveSession()
  2. Next, define two placeholders, one for the clean input signal and one for the corrupted signal:
# Define inputs as placeholder variables
x <- tf$placeholder(tf$float32, shape = shape(NULL, img_size_flat), name = 'x')
x_corrput <- tf$placeholder(tf$float32, shape = shape(NULL, img_size_flat), name = 'x_corrput')

The x_corrput placeholder carries the corrupted image that is fed into the autoencoder, while x holds the clean image that serves as the reconstruction target.
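The corruption itself is applied outside the graph: a fraction of pixels is randomly masked (set to zero) before feeding. As a rough sketch of what such a masking step can look like (the corrupt_masking name here is hypothetical; the recipe's own add_noise helper, used later during training, is assumed to be defined elsewhere in the book):

```r
# Hypothetical masking-corruption sketch (not the recipe's add_noise helper):
# zero out a random fraction `frac` of the pixels in each flattened image.
corrupt_masking <- function(images, frac = 0.3) {
  corrupted <- images
  n_pixels <- ncol(images)
  n_mask <- round(frac * n_pixels)
  for (r in 1:nrow(images)) {
    idx <- sample(n_pixels, size = n_mask)   # pixels to mask in this image
    corrupted[r, idx] <- 0
  }
  corrupted
}
```

Feeding such a corrupted batch as x_corrput while keeping the clean batch as x forces the network to learn noise-robust features instead of the identity map.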

  3. Set up the denoising autoencoder function as shown in the following code:
# Set up the denoising autoencoder
denoisingAutoencoder <- function(x, x_corrput, img_size_flat = 3072,
                                 hidden_layer = c(1024, 512), out_img_size = 256){

  # Build the encoder
  encoder <- NULL
  n_input <- img_size_flat
  currentInput <- x_corrput
  layer <- c(hidden_layer, out_img_size)
  for(i in 1:length(layer)){
    n_output <- layer[i]
    W <- tf$Variable(tf$random_uniform(shape(n_input, n_output),
                                       -1.0 / sqrt(n_input), 1.0 / sqrt(n_input)))
    b <- tf$Variable(tf$zeros(shape(n_output)))
    encoder <- c(encoder, W)
    output <- tf$nn$tanh(tf$matmul(currentInput, W) + b)
    currentInput <- output
    n_input <- n_output
  }

  # Latent representation
  z <- currentInput
  encoder <- rev(encoder)
  layer_rev <- c(rev(hidden_layer), img_size_flat)

  # Build the decoder using the same (transposed) encoder weights
  for(i in 1:length(layer_rev)){
    n_output <- layer_rev[i]
    W <- tf$transpose(encoder[[i]])
    b <- tf$Variable(tf$zeros(shape(n_output)))
    output <- tf$nn$tanh(tf$matmul(currentInput, W) + b)
    currentInput <- output
  }

  # Reconstruction produced by the network
  y <- currentInput

  # Cost function: root mean squared pixel-wise difference
  cost <- tf$sqrt(tf$reduce_mean(tf$square(y - x)))
  return(list("x" = x, "z" = z, "y" = y, "x_corrput" = x_corrput, "cost" = cost))
}
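The cost defined at the end of the function is simply the root mean squared pixel-wise error between the reconstruction y and the clean input x. Outside TensorFlow, the same quantity can be written in one line of plain R (a sketch for intuition, not part of the graph):

```r
# Plain-R equivalent of tf$sqrt(tf$reduce_mean(tf$square(y - x)))
rmse <- function(y, x) sqrt(mean((y - x)^2))
```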

  4. Create the denoising autoencoder object:
# Create denoising AE object
dae_obj <- denoisingAutoencoder(x, x_corrput, img_size_flat = 3072, hidden_layer = c(1024, 512), out_img_size = 256)
  5. Set up the optimizer to minimize the cost:
# Learning set-up
learning_rate <- 0.001
optimizer <- tf$train$AdamOptimizer(learning_rate)$minimize(dae_obj$cost)
  6. Run the optimization:
# Run the optimization within the session
sess$run(tf$global_variables_initializer())
for(i in 1:500){
  spls <- sample(1:dim(train_data$images)[1], 1000L)
  x_corrput_ds <- add_noise(train_data$images[spls, ], frac = 0.3, corr_type = "masking")
  optimizer$run(feed_dict = dict(x = train_data$images[spls, ], x_corrput = x_corrput_ds))
  trainingCost <- dae_obj$cost$eval(feed_dict = dict(x = train_data$images[spls, ], x_corrput = x_corrput_ds))
  cat("Training Cost - ", trainingCost, "\n")
}
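Once training has converged, the same graph can be reused to denoise held-out images by evaluating the reconstruction node y on freshly corrupted inputs. A minimal sketch, assuming a test_data object with the same structure as train_data:

```r
# Denoise a few held-out images with the trained network (sketch;
# assumes test_data$images exists alongside train_data$images)
test_corrupt <- add_noise(test_data$images[1:10, ], frac = 0.3, corr_type = "masking")
reconstructed <- dae_obj$y$eval(feed_dict = dict(x = test_data$images[1:10, ],
                                                 x_corrput = test_corrupt))
# reconstructed contains one 3072-pixel row per input image
```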