Defining the discriminator

We define the discriminator as a convolutional network with three convolutional layers followed by a fully connected layer. Each convolutional layer uses a leaky ReLU activation, and batch normalization is applied at every layer except the input layer:

import tensorflow as tf

# kernel_init is assumed to be defined earlier in the chapter,
# for example: kernel_init = tf.random_normal_initializer(stddev=0.02)

def discriminator(input_images, reuse=False, is_training=False, alpha=0.1):

    with tf.variable_scope('discriminator', reuse=reuse):

First convolutional layer with leaky ReLU activation:

        # First convolutional layer: 64 filters, stride 2 halves the
        # spatial resolution. No batch normalization on the input layer.
        layer1 = tf.layers.conv2d(input_images,
                                  filters=64,
                                  kernel_size=5,
                                  strides=2,
                                  padding='same',
                                  kernel_initializer=kernel_init,
                                  name='conv1')
        layer1 = tf.nn.leaky_relu(layer1, alpha=alpha, name='leaky_relu1')
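Leaky ReLU returns x for positive inputs and alpha * x for negative ones, so gradients keep flowing even where a plain ReLU would be flat, which is why it is the conventional choice in GAN discriminators. A minimal standalone check, assuming TensorFlow 1.x:

    import tensorflow as tf

    x = tf.constant([-2.0, 0.0, 3.0])
    y = tf.nn.leaky_relu(x, alpha=0.1)  # computes max(alpha * x, x)

    with tf.Session() as sess:
        print(sess.run(y))  # prints [-0.2  0.   3. ]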

Second convolutional layer with batch normalization and leaky ReLU activation:

        # Second convolutional layer: 128 filters, stride 2, followed by
        # batch normalization and leaky ReLU.
        layer2 = tf.layers.conv2d(layer1,
                                  filters=128,
                                  kernel_size=5,
                                  strides=2,
                                  padding='same',
                                  kernel_initializer=kernel_init,
                                  name='conv2')
        layer2 = tf.layers.batch_normalization(layer2, training=is_training,
                                               name='batch_normalization2')
        layer2 = tf.nn.leaky_relu(layer2, alpha=alpha, name='leaky_relu2')

Third convolutional layer with batch normalization and leaky ReLU:

        # Third convolutional layer: 256 filters, stride 1 keeps the
        # spatial resolution, followed by batch normalization and leaky ReLU.
        layer3 = tf.layers.conv2d(layer2,
                                  filters=256,
                                  kernel_size=5,
                                  strides=1,
                                  padding='same',
                                  kernel_initializer=kernel_init,
                                  name='conv3')
        layer3 = tf.layers.batch_normalization(layer3, training=is_training,
                                               name='batch_normalization3')
        layer3 = tf.nn.leaky_relu(layer3, alpha=alpha, name='leaky_relu3')
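Note that tf.layers.batch_normalization accumulates its moving mean and variance through ops placed in the UPDATE_OPS collection; these are the statistics used when is_training=False. The training step must run those ops explicitly. A minimal sketch, assuming d_loss and d_vars are hypothetical names for the discriminator loss and variable list defined later:

    # Sketch only: d_loss and d_vars are hypothetical names for the
    # discriminator loss and its trainable variables.
    update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
    with tf.control_dependencies(update_ops):
        d_train_op = tf.train.AdamOptimizer(learning_rate=0.0002).minimize(d_loss, var_list=d_vars)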

Flatten the output of the final convolutional layer:

        # Flatten from (batch, height, width, 256) to (batch, height*width*256);
        # tf.layers.flatten(layer3) would do the same.
        layer3 = tf.reshape(layer3, (-1, layer3.shape[1] * layer3.shape[2] * layer3.shape[3]))

Define the fully connected layer, apply a sigmoid to turn the logits into a probability, and return both the sigmoid output and the logits:

        # Single-unit dense layer produces the real/fake logit.
        logits = tf.layers.dense(layer3, 1)

        # Sigmoid converts the logit into the probability that the
        # input image is real.
        output = tf.sigmoid(logits)

        return output, logits
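With the discriminator defined, the same weights can score both real images and generator samples. A hypothetical usage sketch, where input_real (a placeholder holding real images) and fake_images (the generator's output) are assumed to be defined elsewhere:

    # Hypothetical usage: input_real and fake_images are assumed to be
    # a real-image placeholder and the generator output, respectively.
    d_output_real, d_logits_real = discriminator(input_real, reuse=False, is_training=True)
    d_output_fake, d_logits_fake = discriminator(fake_images, reuse=True, is_training=True)

Passing reuse=True on the second call tells tf.variable_scope to look up the variables created by the first call instead of creating a new set, so both calls share a single discriminator.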