Discriminator loss

The discriminator loss is given as follows:

$$L^D = -\mathbb{E}_{x \sim p_r(x)}[\log D(x)] - \mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z)))]$$

First, we will implement the first term, $-\mathbb{E}_{x \sim p_r(x)}[\log D(x)]$.

The first term, $-\mathbb{E}_{x \sim p_r(x)}[\log D(x)]$, is the expectation of the log likelihood of images sampled from the real data distribution being classified as real.

It is basically the binary cross-entropy loss, which we can implement with the tf.nn.sigmoid_cross_entropy_with_logits() TensorFlow function. It takes two parameters as inputs, logits and labels, explained as follows:

  • The logits input, as the name suggests, is the logits of the network so it is D_logits_real.
  • The labels input, as the name suggests, is the true label. We learned that the discriminator should return 1 for real images and 0 for fake images. Since we are calculating the loss for input images sampled from the real data distribution, the true label is 1.

We use tf.ones_like() for setting the labels to 1 with the same shape as D_logits_real. That is, labels = tf.ones_like(D_logits_real).

Then we compute the mean loss using tf.reduce_mean(). You may notice that there is a minus sign in our loss function, which converts the log likelihood into a minimization objective, since TensorFlow optimizers can only minimize, not maximize. There is no explicit minus sign in the following code because tf.nn.sigmoid_cross_entropy_with_logits() already returns the negative log likelihood, so minimizing its mean gives us exactly the loss we want:

D_loss_real = tf.reduce_mean(
    tf.nn.sigmoid_cross_entropy_with_logits(logits=D_logits_real,
                                            labels=tf.ones_like(D_logits_real)))
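As a sanity check, here is a minimal standalone NumPy sketch (not part of the book's code; the logit values are made up) of what tf.nn.sigmoid_cross_entropy_with_logits() computes for real images with labels of 1, using the numerically stable form from TensorFlow's documentation, max(x, 0) - x*z + log(1 + exp(-|x|)):

```python
import numpy as np

def sigmoid_cross_entropy_with_logits(logits, labels):
    # Numerically stable form used by TensorFlow:
    # max(x, 0) - x * z + log(1 + exp(-|x|))
    return np.maximum(logits, 0) - logits * labels + np.log1p(np.exp(-np.abs(logits)))

# Hypothetical discriminator logits for a batch of three real images
D_logits_real = np.array([2.0, -1.0, 0.5])
labels = np.ones_like(D_logits_real)  # real images -> true label 1

D_loss_real = np.mean(sigmoid_cross_entropy_with_logits(D_logits_real, labels))

# With labels of 1, this reduces to the mean of -log(sigmoid(logit)),
# i.e. the negative log likelihood of the real images being real
expected = np.mean(-np.log(1.0 / (1.0 + np.exp(-D_logits_real))))
print(np.isclose(D_loss_real, expected))  # True
```

Note how the minus sign from the loss function is already baked into the cross-entropy: confident correct logits (large positive values) give a loss near 0, while confident wrong logits give a large positive loss.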

Now we will implement the second term, $-\mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z)))]$.

The second term, $-\mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z)))]$, is the expectation of the log likelihood of images generated by the generator being classified as fake.

Similar to the first term, we can use tf.nn.sigmoid_cross_entropy_with_logits() for calculating the binary cross-entropy loss. In this case:

  • The logits input is D_logits_fake
  • Since we are calculating the loss for the fake images generated by the generator, the true label is 0

We use tf.zeros_like() for setting the labels to 0 with the same shape as D_logits_fake. That is, labels = tf.zeros_like(D_logits_fake):

D_loss_fake = tf.reduce_mean(
    tf.nn.sigmoid_cross_entropy_with_logits(logits=D_logits_fake,
                                            labels=tf.zeros_like(D_logits_fake)))

Now we will implement the final loss.

So, combining the preceding two terms, the loss function of the discriminator is given as follows:

D_loss = D_loss_real + D_loss_fake
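Putting the preceding steps together, the following self-contained NumPy sketch (again with made-up logit values, mirroring the TensorFlow computation rather than using it) traces the full discriminator loss end to end:

```python
import numpy as np

def sigmoid_cross_entropy_with_logits(logits, labels):
    # Numerically stable form matching TensorFlow's implementation
    return np.maximum(logits, 0) - logits * labels + np.log1p(np.exp(-np.abs(logits)))

# Hypothetical discriminator logits for one batch
D_logits_real = np.array([1.5, 2.0, 0.8])    # images from the real data distribution
D_logits_fake = np.array([-1.0, -0.5, 0.2])  # images produced by the generator

# First term: real images should be classified as real (label 1)
D_loss_real = np.mean(sigmoid_cross_entropy_with_logits(
    D_logits_real, np.ones_like(D_logits_real)))

# Second term: generated images should be classified as fake (label 0)
D_loss_fake = np.mean(sigmoid_cross_entropy_with_logits(
    D_logits_fake, np.zeros_like(D_logits_fake)))

# Final discriminator loss is simply the sum of the two terms
D_loss = D_loss_real + D_loss_fake
print(D_loss > 0)  # True: each term is a non-negative cross-entropy
```

Since both terms are plain cross-entropy losses over the same batch structure, summing them is all that is needed; the optimizer then minimizes D_loss with respect to the discriminator's parameters only.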