Summary

We started the chapter by learning about conditional GANs (CGANs) and how they can be used to generate an image of interest by conditioning the generator on a label.

Later, we learned about InfoGANs, where the code c is inferred automatically from the generated output, unlike CGANs, where we explicitly specify c. To infer c, we need the posterior, p(c|G(z,c)), which we don't have access to, so we approximate it with an auxiliary distribution, Q(c|G(z,c)). We then maximized the mutual information, I(c; G(z,c)), to maximize our knowledge about c given the generator's output.
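For a categorical code c, the mutual information term is usually implemented as a simple cross-entropy between the code fed to the generator and the auxiliary network Q's prediction. A minimal sketch of that loss, assuming Q outputs raw logits (the function name and shapes here are illustrative, not from the chapter):

```python
import numpy as np

def infogan_mi_loss(q_logits, c_true):
    """Lower bound on I(c; G(z,c)) for a categorical code c.

    This is the cross-entropy between Q's predicted distribution
    Q(c | G(z,c)) and the code c that was actually fed to the
    generator; minimizing it maximizes the mutual information bound.

    q_logits: (batch, num_codes) raw scores from the auxiliary network Q
    c_true:   (batch,) integer indices of the codes fed to the generator
    """
    # Numerically stable softmax over the code categories
    e = np.exp(q_logits - q_logits.max(axis=1, keepdims=True))
    q = e / e.sum(axis=1, keepdims=True)
    # Negative log-likelihood of the true code under Q
    return -np.mean(np.log(q[np.arange(len(c_true)), c_true] + 1e-12))
```

When Q is uninformative (uniform logits), the loss equals log(num_codes); as Q learns to recover c from the generated image, the loss approaches zero.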

Then, we learned about CycleGANs, which map data from one domain to another. We tried to learn the mapping from the distribution of images in the photos domain to the distribution of images in the paintings domain. Finally, we understood how StackGANs generate photorealistic images from a text description.
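The key constraint that lets CycleGAN learn this mapping without paired images is cycle consistency: translating a photo to a painting with G and back with F should recover the original photo. A minimal sketch of that loss term, using stand-in callables rather than trained networks:

```python
import numpy as np

def cycle_consistency_loss(x, G, F):
    """CycleGAN's cycle-consistency term for one direction.

    G maps domain A -> B and F maps B -> A; translating x with G and
    back with F should recover x, so we penalize the L1 distance
    ||F(G(x)) - x||_1. The full CycleGAN objective adds the symmetric
    term ||G(F(y)) - y||_1 for the other domain.
    """
    return np.mean(np.abs(F(G(x)) - x))
```

If F exactly inverts G, the loss is zero; any information G discards that F cannot restore is penalized, which discourages mode-collapsing mappings.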

In the next chapter, we will learn about autoencoders and their types.
