Generator loss

The generator loss is given as $L^{G} = -\mathbb{E}_{z \sim p_{z}(z)}\left[\log D(G(z))\right]$.

It represents the probability of the fake image being classified as a real image. Just as we computed binary cross-entropy for the discriminator, we use tf.nn.sigmoid_cross_entropy_with_logits() to compute the loss for the generator.

Here, the following should be borne in mind:

  • The logits are D_logits_fake.

  • Since our loss represents the probability of the fake input image being classified as real, the true label is 1. As we learned, the goal of the generator is to generate fake images and fool the discriminator into classifying them as real (the worked step after this list shows why a label of 1 yields exactly this loss).
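
Plugging a label of 1 into the sigmoid cross-entropy makes the connection with the generator loss explicit. Writing $x$ for D_logits_fake and $\sigma$ for the sigmoid, so that $D(G(z)) = \sigma(x)$, the per-example loss reduces to the term in the generator loss given earlier:

$-\left[1 \cdot \log \sigma(x) + (1 - 1) \cdot \log\left(1 - \sigma(x)\right)\right] = -\log \sigma(x) = -\log D(G(z))$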

We use tf.ones_like() for setting the labels to 1 with the same shape as D_logits_fake. That is, labels = tf.ones_like(D_logits_fake):

G_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=D_logits_fake, labels=tf.ones_like(D_logits_fake)))
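
To see this loss in action, the following is a minimal, self-contained sketch that evaluates the same expression on a dummy D_logits_fake tensor. The dummy values and the use of TensorFlow 2.x eager execution are assumptions for illustration only; in the GAN itself, D_logits_fake comes from feeding the generated images through the discriminator (under TensorFlow 1.x you would evaluate G_loss inside a session instead of calling .numpy()):

import tensorflow as tf

# Dummy values standing in for the discriminator's raw (pre-sigmoid) output
# on a batch of four generated images.
D_logits_fake = tf.constant([[2.1], [-0.3], [0.8], [-1.5]])

# Labels are all 1s: the generator wants every fake image classified as real.
labels = tf.ones_like(D_logits_fake)

G_loss = tf.reduce_mean(
    tf.nn.sigmoid_cross_entropy_with_logits(logits=D_logits_fake, labels=labels))

print(G_loss.numpy())  # a single scalar; it shrinks as the fake logits look more "real"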