Discriminator and generator losses

In this part, we define the discriminator and generator losses; this can be considered the trickiest part of the implementation.

We know that the generator tries to replicate the original images, and that the discriminator acts as a judge, receiving both images from the generator and the original input images. So when designing the loss for each part, we need to target two things.

First, we need the discriminator part of the network to be able to distinguish between the fake images produced by the generator and the real images coming from the original training examples. During training, we will feed the discriminator a batch that is divided into two categories: images from the original input, and fake images produced by the generator.

So the final discriminator loss will be the sum of two terms: its ability to accept the real images as real and its ability to detect the fake images as fake. Both terms are computed with the same sigmoid cross-entropy function, averaged over the batch:

tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits_layer, labels=labels))

So we need to calculate two losses to come up with the final discriminator loss.
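To make the cross-entropy formula concrete, here is a small NumPy sketch (not part of the original TensorFlow code) of what tf.nn.sigmoid_cross_entropy_with_logits computes per element, using the numerically stable form max(x, 0) - x*z + log(1 + exp(-|x|)):

```python
import numpy as np

def sigmoid_cross_entropy_with_logits(logits, labels):
    # Numerically stable per-element cross-entropy, matching TensorFlow's
    # internal formula: max(x, 0) - x * z + log(1 + exp(-|x|))
    x = np.asarray(logits, dtype=np.float64)
    z = np.asarray(labels, dtype=np.float64)
    return np.maximum(x, 0) - x * z + np.log1p(np.exp(-np.abs(x)))

# Toy logits and hard labels for illustration
logits = np.array([-2.0, 0.0, 3.0])
labels = np.array([0.0, 1.0, 1.0])
loss = sigmoid_cross_entropy_with_logits(logits, labels)
```

This is mathematically equivalent to the textbook definition -z*log(sigmoid(x)) - (1-z)*log(1-sigmoid(x)), but avoids overflow for large-magnitude logits, which is why TensorFlow takes logits rather than probabilities.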

The first loss, disc_loss_real, will be calculated from the logits values that we get from the discriminator and the labels, which in this case will be all ones, since we know that all the images in this mini-batch come from the real input images of the MNIST dataset. To enhance the model's ability to generalize on the test set and give better results, it has been found in practice that replacing the hard label value of 1 with 0.9 works better. This change to the labels is called label smoothing:

 labels = tf.ones_like(tensor) * (1 - smooth)
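As a quick sanity check on why this helps (a NumPy sketch, not from the original code), cross-entropy against a smoothed label of 0.9 is minimized when the discriminator outputs a probability of 0.9, so it is not pushed to saturate toward 1.0:

```python
import numpy as np

def bce(p, z):
    # Binary cross-entropy between predicted probability p and target label z
    return -z * np.log(p) - (1 - z) * np.log(1 - p)

smooth = 0.1
target = 1.0 * (1 - smooth)  # smoothed "real" label: 0.9

# The loss at p = 0.9 is lower than at the more extreme p = 0.99,
# so an overconfident discriminator is actually penalized.
loss_at_target = bce(0.9, target)
loss_extreme = bce(0.99, target)
```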

For the second part of the discriminator loss, which is the discriminator's ability to detect fake images, the loss will be computed between the logits values that we get from the discriminator and labels that are all zeros, since we know that all the images in this mini-batch come from the generator and not from the original input.

Now that we have discussed the discriminator loss, we need to calculate the generator loss as well. The generator loss, gen_loss, will be the loss between disc_logits_fake (the output of the discriminator for the fake images) and labels that are all ones, since the generator is trying to convince the discriminator that its fake images are real:


# calculating the losses of the discriminator and the generator
disc_labels_real = tf.ones_like(disc_logits_real) * (1 - label_smooth)
disc_labels_fake = tf.zeros_like(disc_logits_fake)

disc_loss_real = tf.nn.sigmoid_cross_entropy_with_logits(labels=disc_labels_real, logits=disc_logits_real)
disc_loss_fake = tf.nn.sigmoid_cross_entropy_with_logits(labels=disc_labels_fake, logits=disc_logits_fake)

# averaging the discriminator loss over the batch
disc_loss = tf.reduce_mean(disc_loss_real + disc_loss_fake)

# averaging the generator loss over the batch
gen_loss = tf.reduce_mean(
    tf.nn.sigmoid_cross_entropy_with_logits(
        labels=tf.ones_like(disc_logits_fake),
        logits=disc_logits_fake))
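As a rough numerical check of the snippet above (a NumPy sketch using the same formulas, outside the TensorFlow graph; the logits values are made up for illustration), we can reproduce the three losses on a toy batch:

```python
import numpy as np

def xent(logits, labels):
    # Stable sigmoid cross-entropy, same formula TensorFlow uses internally
    return np.maximum(logits, 0) - logits * labels + np.log1p(np.exp(-np.abs(logits)))

label_smooth = 0.1
# Hypothetical discriminator logits for a toy batch of 4 real and 4 fake images
disc_logits_real = np.array([2.1, 1.5, 0.3, 2.8])
disc_logits_fake = np.array([-1.7, -0.4, 0.2, -2.0])

# Discriminator: real images against smoothed ones, fakes against zeros
disc_loss_real = xent(disc_logits_real, np.ones_like(disc_logits_real) * (1 - label_smooth))
disc_loss_fake = xent(disc_logits_fake, np.zeros_like(disc_logits_fake))
disc_loss = np.mean(disc_loss_real + disc_loss_fake)

# Generator: wants the discriminator to call its fakes real (labels of one)
gen_loss = np.mean(xent(disc_logits_fake, np.ones_like(disc_logits_fake)))
```

With these mostly negative fake logits (the discriminator currently rejects the fakes), gen_loss comes out larger than the discriminator's fake-side loss, which is exactly the signal that pushes the generator to improve.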