Second term

Now, let's look at the second term:

$$\mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z)))]$$

Here, $z \sim p_z(z)$ implies that we are sampling random noise $z$ from the uniform distribution $p_z(z)$. $G(z)$ implies that the generator takes the random noise $z$ as an input and returns a fake image based on its implicitly learned distribution $p_g$.
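To make the sampling step concrete, here is a minimal NumPy sketch. The linear-plus-tanh "generator" and all weight shapes are hypothetical stand-ins for illustration only, not a trained network:

```python
import numpy as np

rng = np.random.default_rng(0)

# z ~ p_z(z): a batch of 64 noise vectors drawn from the uniform distribution U(-1, 1)
z = rng.uniform(-1.0, 1.0, size=(64, 100))

# Hypothetical stand-in for G: a random linear map from 100-dim noise
# to a flattened 28x28 "image", squashed into (-1, 1) by tanh
W = rng.normal(0.0, 0.01, size=(100, 784))
fake_images = np.tanh(z @ W)  # G(z): one fake image per noise vector

print(fake_images.shape)  # (64, 784)
```

Each row of `fake_images` plays the role of $G(z)$ for one sampled noise vector $z$.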

$D(G(z))$ implies that we are feeding the fake image generated by the generator to the discriminator, and it will return the probability of the fake input image being a real image.

If we subtract $D(G(z))$ from 1, then it will return the probability of the fake input image being a fake image:

$$1 - D(G(z))$$

Since we know $G(z)$ is not a real image, the discriminator will maximize this probability. That is, the discriminator maximizes the probability of $G(z)$ being classified as a fake image, so we write:

$$\max_{D} \, [1 - D(G(z))]$$

Instead of maximizing the raw probability, we maximize the log probability:

$$\max_{D} \, \log(1 - D(G(z)))$$

$\mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z)))]$ implies the expectation of the log likelihood of the input images generated by the generator being fake.
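The expectation above can be estimated by averaging over a large batch of noise samples. The sketch below does this with a hypothetical linear generator and a sigmoid "discriminator"; both networks and their weights are illustrative stand-ins, not the book's actual models:

```python
import numpy as np

rng = np.random.default_rng(0)

def G(z, W):
    """Hypothetical stand-in generator: noise -> fake flattened image in (-1, 1)."""
    return np.tanh(z @ W)

def D(x, v):
    """Hypothetical stand-in discriminator: image -> probability of being real."""
    return 1.0 / (1.0 + np.exp(-(x @ v)))  # sigmoid output in (0, 1)

# Fixed random weights, chosen only so the sketch runs
W = rng.normal(0.0, 0.01, size=(100, 784))
v = rng.normal(0.0, 0.01, size=(784,))

# Monte Carlo estimate of E_{z ~ p_z(z)}[log(1 - D(G(z)))]:
# sample many z from U(-1, 1) and average log(1 - D(G(z)))
z = rng.uniform(-1.0, 1.0, size=(10_000, 100))
second_term = np.mean(np.log(1.0 - D(G(z, W), v)))

print(second_term)  # negative, since 1 - D(G(z)) is a probability below 1
```

The discriminator's objective for this term is to push `second_term` as high as possible, i.e. to drive $D(G(z))$ toward 0 for fake inputs.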
