Generator

Now it's time to implement the second part of the network: the generator, which tries to reconstruct images resembling the original inputs from the latent space z. We'll be using tf.variable_scope for this function as well.

So, let's define the function that returns an image produced by the generator:

def generator(z_latent_space, output_channel_dim, is_train=True):
    # Create variables during training; reuse them at inference time
    with tf.variable_scope('generator', reuse=not is_train):
        # Leaky ReLU parameter
        leaky_param_alpha = 0.2

        # Project the latent vector into a 2*2*512 feature volume
        fully_connected_layer = tf.layers.dense(z_latent_space, 2*2*512)

        # Reshaping the output back to a 4D tensor to match the accepted format for convolution layers
        reshaped_output = tf.reshape(fully_connected_layer, (-1, 2, 2, 512))
        normalized_output = tf.layers.batch_normalization(reshaped_output, training=is_train)
        leaky_relu_output = tf.maximum(leaky_param_alpha * normalized_output, normalized_output)

        # Upsample 2x2 -> 7x7 (kernel 5, stride 2, 'valid' padding)
        conv_layer_1 = tf.layers.conv2d_transpose(leaky_relu_output, 256, 5, 2, 'valid')
        normalized_output = tf.layers.batch_normalization(conv_layer_1, training=is_train)
        leaky_relu_output = tf.maximum(leaky_param_alpha * normalized_output, normalized_output)

        # Upsample 7x7 -> 14x14
        conv_layer_2 = tf.layers.conv2d_transpose(leaky_relu_output, 128, 5, 2, 'same')
        normalized_output = tf.layers.batch_normalization(conv_layer_2, training=is_train)
        leaky_relu_output = tf.maximum(leaky_param_alpha * normalized_output, normalized_output)

        # Upsample 14x14 -> 28x28 and squash pixel values to [-1, 1] with tanh
        logits_layer = tf.layers.conv2d_transpose(leaky_relu_output, output_channel_dim, 5, 2, 'same')
        output = tf.tanh(logits_layer)

        return output
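As a quick sanity check, here is a minimal sketch of how this function might be wired into a graph. The placeholder name z_input and the latent dimension of 100 are illustrative assumptions, not values fixed by the text; output_channel_dim is set to 1 for a grayscale dataset such as MNIST:

import tensorflow as tf

# Hypothetical latent dimension and placeholder, chosen for illustration
z_dim = 100
z_input = tf.placeholder(tf.float32, (None, z_dim), name='z_input')

# Build the generator for training (variables are created here)...
generated_images = generator(z_input, output_channel_dim=1, is_train=True)

# ...and call it again with is_train=False to reuse the same variables for sampling
sampled_images = generator(z_input, output_channel_dim=1, is_train=False)

print(generated_images.shape)  # (?, 28, 28, 1)

Note how is_train drives both reuse in tf.variable_scope and the training flag of batch normalization, so the upsampling path (2x2 to 7x7 to 14x14 to 28x28) behaves consistently between training and inference.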