The generator component of a GAN is a generative model. Generative models come in two types: implicit and explicit density models. An implicit density model does not use an explicit density function to learn the probability distribution, whereas an explicit density model, as the name suggests, does. GANs fall into the first category; that is, they are an implicit density model. Let's study this in detail and understand how GANs are an implicit density model.
Let's say we have a generator, G. It is basically a neural network parametrized by θ_g. The role of the generator network is to generate new images. How does it do that? What should be the input to the generator?
We sample a random noise, z, from a normal or uniform distribution, p_z(z). We feed this random noise, z, as an input to the generator, and the generator converts this noise into an image:

Surprising, isn't it? How does the generator convert random noise into a realistic image?
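The noise-sampling step can be sketched as follows. The latent dimension of 100 and batch size of 64 are illustrative assumptions, not values fixed by the text:

```python
import numpy as np

# Assumed, illustrative settings: a 100-dimensional noise vector z,
# sampled in batches of 64.
latent_dim = 100
batch_size = 64

# Option 1: sample z from a standard normal distribution, N(0, 1).
z_normal = np.random.normal(loc=0.0, scale=1.0, size=(batch_size, latent_dim))

# Option 2: sample z from a uniform distribution over [-1, 1].
z_uniform = np.random.uniform(low=-1.0, high=1.0, size=(batch_size, latent_dim))
```

Either distribution works in practice; what matters is that z is easy to sample and the generator learns to map it to the image distribution.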
Let's say we have a dataset containing a collection of human faces and we want our generator to generate a new human face. First, the generator learns all the features of the face by learning the probability distribution of the images in our training set. Once the generator learns the correct probability distribution, it can generate totally new human faces.
But how does the generator learn the distribution of the training set? That is, how does the generator learn the distribution of images of human faces in the training set?
A generator is nothing but a neural network. So, what happens is that the neural network learns the distribution of the images in our training set implicitly; let's call this distribution the generator distribution, p_g. In the first iterations, the generator produces really noisy images, but over a series of iterations it approximates the probability distribution of our training set and learns to generate correct images by tuning its parameters, θ_g.
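As a minimal sketch of the idea that the generator G is just a neural network mapping z to an image, here is an untrained two-layer network with randomly initialized parameters θ_g (the layer sizes and 28 x 28 output shape are illustrative assumptions; training these weights is what the rest of the GAN setup is for):

```python
import numpy as np

rng = np.random.default_rng(0)

latent_dim = 100    # assumed size of the noise vector z
hidden_dim = 128    # assumed hidden-layer width
img_pixels = 28 * 28  # assumed flattened image size (e.g., MNIST-like faces dataset)

# theta_g: the generator's parameters, here randomly initialized.
# Training would tune these so that p_g approaches the data distribution.
W1 = rng.normal(0.0, 0.02, size=(latent_dim, hidden_dim))
b1 = np.zeros(hidden_dim)
W2 = rng.normal(0.0, 0.02, size=(hidden_dim, img_pixels))
b2 = np.zeros(img_pixels)

def generator(z):
    """Map noise z to a fake image, G(z; theta_g)."""
    h = np.maximum(0.0, z @ W1 + b1)  # ReLU hidden layer
    return np.tanh(h @ W2 + b2)       # pixel values squashed into [-1, 1]

# One noise sample in, one (currently noisy) fake image out.
z = rng.normal(size=(1, latent_dim))
fake_image = generator(z).reshape(28, 28)
```

Because the weights are random, the output is exactly the "really noisy image" described above; only after training does the same mapping produce realistic images.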