Defining the generator

We define a generator that takes the noise, z, and a variable, c, as input and returns an image. Instead of using only fully connected layers in the generator, we use a deconvolutional network, just as we did when we studied DCGANs:

def generator(c, z, reuse=None):

First, concatenate the noise, z, and the variable, c:

    input_combined = tf.concat([c, z], axis=1)
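
As a quick sanity check, the concatenation simply joins the two inputs along the feature axis. In the following standalone sketch, the sizes 62 and 12 are illustrative assumptions, not values fixed by this section:

    import tensorflow as tf

    z = tf.placeholder(tf.float32, [None, 62])  # noise vector (illustrative size)
    c = tf.placeholder(tf.float32, [None, 12])  # conditioning variable (illustrative size)
    input_combined = tf.concat([c, z], axis=1)  # shape: [None, 74]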

Define the first layer, which is a fully connected layer with batch normalization and ReLU activations:

    fully_connected1 = tf.layers.dense(input_combined, 1024)
    batch_norm1 = tf.layers.batch_normalization(fully_connected1, training=is_train)
    relu1 = tf.nn.relu(batch_norm1)

Define the second layer, which is also fully connected with batch normalization and ReLU activations:

    fully_connected2 = tf.layers.dense(relu1, 7 * 7 * 128)
    batch_norm2 = tf.layers.batch_normalization(fully_connected2, training=is_train)
    relu2 = tf.nn.relu(batch_norm2)

Reshape the result of the second layer into a batch of 7 x 7 feature maps with 128 channels. We start at 7 x 7 because each of the two stride-2 transposed convolutions that follow doubles the spatial dimensions, taking us from 7 x 7 to 14 x 14 to 28 x 28, the size of the output image:

    relu_flat = tf.reshape(relu2, [batch_size, 7, 7, 128])

The third layer performs deconvolution, that is, a transposed convolution operation, followed by batch normalization and a ReLU activation:

    deconv1 = tf.layers.conv2d_transpose(relu_flat,
                                         filters=64,
                                         kernel_size=4,
                                         strides=2,
                                         padding='same',
                                         activation=None)
    batch_norm3 = tf.layers.batch_normalization(deconv1, training=is_train)
    relu3 = tf.nn.relu(batch_norm3)

The fourth layer is another transposed convolution operation:

    deconv2 = tf.layers.conv2d_transpose(relu3,
                                         filters=1,
                                         kernel_size=4,
                                         strides=2,
                                         padding='same',
                                         activation=None)
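
Because padding='same' with strides=2 doubles each spatial dimension, the two transposed convolutions upsample the feature maps from 7 x 7 to 14 x 14 and then to 28 x 28. The following standalone sketch verifies this shape arithmetic (the batch size of 32 is an arbitrary assumption):

    import tensorflow as tf

    x = tf.zeros([32, 7, 7, 128])
    d1 = tf.layers.conv2d_transpose(x, filters=64, kernel_size=4,
                                    strides=2, padding='same')
    d2 = tf.layers.conv2d_transpose(d1, filters=1, kernel_size=4,
                                    strides=2, padding='same')
    print(d1.shape)  # (32, 14, 14, 64)
    print(d2.shape)  # (32, 28, 28, 1)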

Apply the sigmoid function to the result of the fourth layer to get the output:

    output = tf.nn.sigmoid(deconv2) 

    return output
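
Putting all of the pieces together, a minimal self-contained sketch of the generator might look as follows. Note that the variable scope name and the use of the reuse argument are assumptions (the snippets above accept reuse but never apply it); is_train is passed in explicitly rather than taken from the surrounding scope, and -1 is used in the reshape so that the sketch does not depend on a global batch_size variable:

    import tensorflow as tf

    def generator(c, z, is_train=True, reuse=None):
        # Variable scope and reuse handling are an assumption, added so
        # the generator can be called more than once with shared weights
        with tf.variable_scope('generator', reuse=reuse):
            # Concatenate the conditioning variable and the noise
            input_combined = tf.concat([c, z], axis=1)

            # First fully connected layer with batch norm and ReLU
            fully_connected1 = tf.layers.dense(input_combined, 1024)
            batch_norm1 = tf.layers.batch_normalization(fully_connected1,
                                                        training=is_train)
            relu1 = tf.nn.relu(batch_norm1)

            # Second fully connected layer producing 7 * 7 * 128 features
            fully_connected2 = tf.layers.dense(relu1, 7 * 7 * 128)
            batch_norm2 = tf.layers.batch_normalization(fully_connected2,
                                                        training=is_train)
            relu2 = tf.nn.relu(batch_norm2)

            # Reshape into a batch of 7 x 7 feature maps with 128 channels;
            # -1 infers the batch size at runtime
            relu_flat = tf.reshape(relu2, [-1, 7, 7, 128])

            # First transposed convolution: 7 x 7 -> 14 x 14
            deconv1 = tf.layers.conv2d_transpose(relu_flat, filters=64,
                                                 kernel_size=4, strides=2,
                                                 padding='same', activation=None)
            batch_norm3 = tf.layers.batch_normalization(deconv1,
                                                        training=is_train)
            relu3 = tf.nn.relu(batch_norm3)

            # Second transposed convolution: 14 x 14 -> 28 x 28
            deconv2 = tf.layers.conv2d_transpose(relu3, filters=1,
                                                 kernel_size=4, strides=2,
                                                 padding='same', activation=None)

            # Squash pixel values into [0, 1]
            output = tf.nn.sigmoid(deconv2)
        return output

With this signature, calling generator(c, z) on placeholders of shape [None, 12] and [None, 62] (illustrative sizes) yields a [None, 28, 28, 1] batch of generated images.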