Summary

We started this chapter by understanding the difference between generative and discriminative models. We learned that discriminative models learn to find a good decision boundary that separates the classes in an optimal way, while generative models learn the characteristics of each class.

Later, we understood how GANs work. A GAN consists of two neural networks called the generator and the discriminator. The role of the generator is to generate new images by learning the real data distribution, while the discriminator acts as a critic: its role is to tell whether a given image comes from the true data distribution or from the generator, that is, whether it is a real or a fake image.

Next, we learned about DCGAN, where we replace the feedforward neural networks in the generator and the discriminator with convolutional neural networks. The discriminator uses convolutional layers to classify an image as real or fake, while the generator uses transposed convolution layers to generate new images.
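The shape arithmetic behind this is worth recalling: strided convolutions shrink the discriminator's feature maps, and transposed convolutions grow the generator's. Below is a small sketch using the standard output-size formulas (a generic illustration with DCGAN-style kernel/stride choices, not code from the chapter):

```python
def conv_out(size, kernel, stride, padding):
    """Output size of a standard (strided) convolution."""
    return (size + 2 * padding - kernel) // stride + 1

def conv_transpose_out(size, kernel, stride, padding):
    """Output size of a transposed (fractionally strided) convolution."""
    return (size - 1) * stride - 2 * padding + kernel

# Generator: grow a 4x4 feature map to a 64x64 image in four upsampling steps.
size = 4
for _ in range(4):
    size = conv_transpose_out(size, kernel=4, stride=2, padding=1)
print(size)  # 64

# Discriminator: the same layer settings halve the spatial size each step.
print(conv_out(64, kernel=4, stride=2, padding=1))  # 32
```

With kernel 4, stride 2, and padding 1, each transposed-convolution layer exactly doubles the spatial size, which is why this configuration is so common in DCGAN-style generators.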

Then, we learned about LSGAN, which replaces the loss functions of both the generator and the discriminator with the least squares error loss. When we use sigmoid cross-entropy as the loss function, the gradients tend to vanish once the fake samples are on the correct side of the decision boundary, even when they are not close to the real distribution. So we replace the cross-entropy loss with the least squares error loss, whose gradients do not vanish until the fake samples match the true distribution; this forces the gradient updates to pull the fake samples toward the real samples.
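The saturation argument above can be checked directly. In this sketch (a generic illustration, not the chapter's code), z is the discriminator's raw output for a fake sample and the generator's target is 1: the sigmoid cross-entropy gradient with respect to z shrinks toward zero as z grows, while the least squares gradient stays proportional to the distance from the target.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Sigmoid cross-entropy with target 1: gradient w.r.t. the logit z is
# sigmoid(z) - 1. It vanishes once z is large, i.e. once the fake sample
# is confidently on the "real" side, however far it is from the real data.
for z in [0.0, 2.0, 6.0]:
    print(sigmoid(z) - 1.0)   # shrinks toward 0 as z grows

# Least squares loss (z - 1)^2 with target 1: gradient is 2 * (z - 1),
# which stays proportional to the gap, so the updates do not vanish.
for z in [0.0, 2.0, 6.0]:
    print(2.0 * (z - 1.0))    # keeps growing with the distance from 1
```

This is exactly why LSGAN keeps penalizing fake samples that are correctly classified but still sit far from the real data.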

Finally, we learned about another interesting type of GAN called the Wasserstein GAN (WGAN), where we use the Wasserstein distance measure in the discriminator's loss function. In a vanilla GAN, we are essentially minimizing the JS divergence, which remains constant when the distributions of the real and fake data do not overlap, so its gradient is zero and the generator gets no useful signal. To overcome this, we use the Wasserstein distance measure in the discriminator's loss function.
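The contrast between the two measures can be shown on a toy example (an assumed setup, not the chapter's code): two disjoint distributions, one concentrated at position 0 and one at position d. The JS divergence saturates at log 2 no matter how far apart they are, while the Wasserstein distance still grows with the gap, giving a meaningful training signal.

```python
import numpy as np

def js_divergence(p, q):
    """Jensen-Shannon divergence between two discrete distributions."""
    m = 0.5 * (p + q)
    def kl(a, b):
        mask = a > 0
        return np.sum(a[mask] * np.log(a[mask] / b[mask]))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def wasserstein_1d(positions, p, q):
    """W1 distance on a shared 1-D grid: integral of |CDF_p - CDF_q|."""
    cdf_gap = np.cumsum(p) - np.cumsum(q)
    widths = np.diff(positions)
    return np.sum(np.abs(cdf_gap[:-1]) * widths)

positions = np.arange(11.0)
for d in [2, 5, 10]:
    p = np.zeros(11); p[0] = 1.0   # all real mass at position 0
    q = np.zeros(11); q[d] = 1.0   # all fake mass at position d
    print(js_divergence(p, q), wasserstein_1d(positions, p, q))
# JS stays at log(2) for every d, while W1 grows with the distance d.
```

Because the Wasserstein distance varies smoothly with how far the fake distribution is from the real one, the WGAN critic provides gradients even when the two distributions have no overlap at all.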

In the next chapter, we will learn about several other interesting types of GANs called CGAN, InfoGAN, CycleGAN, and StackGAN.
