Breaking down the generator

The generator component of a GAN is a generative model. Generative models come in two types: implicit density models and explicit density models. An implicit density model does not use an explicit density function to learn the probability distribution, whereas an explicit density model, as the name suggests, does; variational autoencoders and PixelRNN are examples of the latter. GANs fall into the first category: they are implicit density models. Let's study in detail how GANs are implicit density models.

Let's say we have a generator, G. It is basically a neural network parametrized by θ_g. The role of the generator network is to generate new images. How does it do that? What should be the input to the generator?

We sample a random noise, z, from a normal or uniform distribution, p_z. We feed this random noise, z, as an input to the generator, and it converts the noise into an image, x = G(z).
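As a rough sketch of this pipeline, assuming PyTorch, the snippet below samples noise z and maps it through a small feedforward generator. The layer sizes, the batch size of 64, and the flattened 28 x 28 output are illustrative assumptions, not details from the text:

```python
import torch
import torch.nn as nn

# A minimal generator sketch: maps a 100-dim noise vector z
# to a flattened 28x28 grayscale image (sizes are illustrative).
generator = nn.Sequential(
    nn.Linear(100, 256),
    nn.ReLU(),
    nn.Linear(256, 784),
    nn.Tanh(),                 # pixel values in [-1, 1]
)

z = torch.randn(64, 100)       # sample a batch of noise from N(0, 1)
fake_images = generator(z)     # x = G(z), shape: (64, 784)
```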

Surprising, isn't it? How does the generator convert random noise into a realistic image?

Let's say we have a dataset containing a collection of human faces and we want our generator to generate a new human face. First, the generator learns all the features of the face by learning the probability distribution of the images in our training set. Once the generator learns the correct probability distribution, it can generate totally new human faces.

But how does the generator learn the distribution of the training set? That is, how does the generator learn the distribution of images of human faces in the training set?

A generator is nothing but a neural network. So, what happens is that the neural network learns the distribution of the images in our training set implicitly; let's call this distribution the generator distribution, p_g. At the first iteration, the generator produces a really noisy image, but over a series of iterations it learns the exact probability distribution of our training set and learns to generate a correct image by tuning its parameters, θ_g.
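To make this iterative tuning concrete, here is a minimal sketch, assuming PyTorch and reusing the generator defined above. The stand-in discriminator, the non-saturating generator loss, the learning rate, and the number of steps are all illustrative assumptions, not details from the text:

```python
import torch
import torch.nn as nn

# Illustrative stand-in discriminator: outputs the probability
# that an image is real (its own training is covered separately).
discriminator = nn.Sequential(
    nn.Linear(784, 256),
    nn.LeakyReLU(0.2),
    nn.Linear(256, 1),
    nn.Sigmoid(),
)

optimizer = torch.optim.Adam(generator.parameters(), lr=2e-4)

for step in range(1000):                 # step count is arbitrary
    z = torch.randn(64, 100)             # fresh noise each iteration
    fake = generator(z)                  # x = G(z)
    # Non-saturating generator loss: push the discriminator
    # toward labeling the fake images as real.
    loss = -torch.log(discriminator(fake) + 1e-8).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()                     # nudge theta_g toward p_g ≈ data distribution
```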

It is important to note that the normal or uniform distribution, p_z, is not what the generator learns; it is only used for sampling the random noise that we feed as input to the generator. The generator network implicitly learns the distribution of our training set, and we call this distribution the generator distribution, p_g. That is why we call the generator network an implicit density model.
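The contrast between the two distributions can be seen directly in code. A minimal sketch, reusing generator from the earlier snippet: p_z is a fixed, simple distribution we can both sample and evaluate, while p_g can only be sampled:

```python
import torch

# p_z is fixed and simple: we can sample it and evaluate its density.
z = torch.randn(10, 100)

# p_g is implicit: we can draw samples from it just by running the
# generator ...
samples_from_pg = generator(z)
# ... but there is no density function p_g(x) we can evaluate, which
# is exactly what makes the generator an implicit density model.
```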