Total loss

We just learned the loss functions of the generator and the discriminator. Combining these two losses, we can write the final loss function as follows:

$$\min_{G} \max_{D} V(D, G) = \mathbb{E}_{x \sim p_{\text{data}}(x)}\left[\log D(x)\right] + \mathbb{E}_{z \sim p_{z}(z)}\left[\log\left(1 - D(G(z))\right)\right]$$

So, our objective function is a min-max objective: the discriminator tries to maximize it while the generator tries to minimize it, and we find the optimal generator parameter, $\theta_g$, and discriminator parameter, $\theta_d$, by backpropagating through the respective networks.
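In practice, the two expectations are approximated by averages over a batch of real samples and a batch of noise vectors. Below is a minimal sketch of estimating the value function in PyTorch (a framework assumption; the chapter does not mandate one), where `D` and `G` are hypothetical networks: `D` outputs a probability in (0, 1) and `G` maps noise to a sample.

```python
import torch

def value_function(D, G, real_batch, z_batch):
    """Monte Carlo estimate of V(D, G) = E[log D(x)] + E[log(1 - D(G(z)))]."""
    eps = 1e-8  # numerical guard against log(0)
    real_term = torch.log(D(real_batch) + eps).mean()      # E_x[log D(x)]
    fake_term = torch.log(1 - D(G(z_batch)) + eps).mean()  # E_z[log(1 - D(G(z)))]
    return real_term + fake_term  # D ascends this value; G descends it
```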

So, we perform gradient ascent, that is, maximization, on the discriminator:

$$\max_{\theta_d} \; \mathbb{E}_{x \sim p_{\text{data}}(x)}\left[\log D(x)\right] + \mathbb{E}_{z \sim p_{z}(z)}\left[\log\left(1 - D(G(z))\right)\right]$$

And we perform gradient descent, that is, minimization, on the generator:

$$\min_{\theta_g} \; \mathbb{E}_{z \sim p_{z}(z)}\left[\log\left(1 - D(G(z))\right)\right]$$
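Putting the two updates together, one alternating training step might look like the following sketch. It assumes hypothetical `D`, `G`, and their optimizers (`d_opt`, `g_opt`); gradient ascent on the discriminator is implemented, as usual, by descending the negated objective.

```python
import torch

def train_step(D, G, d_opt, g_opt, real_batch, z_dim=100):
    eps = 1e-8
    z = torch.randn(real_batch.size(0), z_dim)

    # Discriminator step: ascend E[log D(x)] + E[log(1 - D(G(z)))]
    # by descending its negation; detach G(z) so only D is updated here.
    d_opt.zero_grad()
    d_loss = -(torch.log(D(real_batch) + eps).mean()
               + torch.log(1 - D(G(z).detach()) + eps).mean())
    d_loss.backward()
    d_opt.step()

    # Generator step: descend E[log(1 - D(G(z)))].
    g_opt.zero_grad()
    g_loss = torch.log(1 - D(G(z)) + eps).mean()
    g_loss.backward()
    g_opt.step()
    return d_loss.item(), g_loss.item()
```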

However, optimizing the preceding generator objective does not work well in practice and causes a stability issue: early in training, when the generator is still poor, the discriminator rejects its samples with high confidence, so $D(G(z))$ is close to 0 and the gradient of $\log\left(1 - D(G(z))\right)$ saturates, leaving the generator with almost no learning signal. So, we introduce a new form of loss called heuristic loss.
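The standard heuristic fix (the non-saturating loss from the original GAN paper) is for the generator to maximize $\mathbb{E}_{z}\left[\log D(G(z))\right]$ instead of minimizing $\mathbb{E}_{z}\left[\log\left(1 - D(G(z))\right)\right]$; both push $D(G(z))$ toward 1, but the former keeps a strong gradient when $D(G(z))$ is near 0. A minimal sketch, reusing the hypothetical `D` and `G` from above:

```python
import torch

def heuristic_g_loss(D, G, z):
    """Non-saturating generator loss: descend -E[log D(G(z))],
    which is equivalent to ascending E[log D(G(z))]."""
    eps = 1e-8  # numerical guard against log(0)
    return -torch.log(D(G(z)) + eps).mean()
```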
