In this experiment, we will use the same network architecture as in the previous one, but this time to generate handwritten images of the digit eight. The discriminator and GAN loss values over 100 training iterations can be plotted as follows:
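A plot like this can be produced with a short sketch along the following lines. The loss histories would normally be collected during training; here, `d_loss_history` and `g_loss_history` are placeholder values used purely for illustration:

```python
# Sketch: plotting discriminator and GAN loss per training iteration.
# d_loss_history and g_loss_history are placeholders; in practice they
# would be the loss values recorded at each of the 100 iterations.
import matplotlib
matplotlib.use("Agg")  # non-interactive backend, saves to file
import matplotlib.pyplot as plt

d_loss_history = [0.7 / (i + 1) ** 0.1 for i in range(100)]  # placeholder
g_loss_history = [1.4 / (i + 1) ** 0.1 for i in range(100)]  # placeholder

iterations = range(1, len(d_loss_history) + 1)
plt.plot(iterations, d_loss_history, label="Discriminator loss")
plt.plot(iterations, g_loss_history, label="GAN loss")
plt.xlabel("Iteration")
plt.ylabel("Loss")
plt.legend()
plt.savefig("loss_plot.png")
```

With real training histories substituted in, the same few lines reproduce the plot discussed here.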
From the preceding plot, we can make the following observations:
- The discriminator and GAN loss values fluctuate, but the variability tends to decrease as training progresses from iteration 1 to 100.
- Occasional high spikes in the GAN loss diminish as the network's training proceeds.
A plot of the first fake image from each iteration is as follows:
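Such a grid of images can be sketched as follows. The `fake_images` list is a placeholder of random 28 x 28 arrays; in practice it would hold the first fake image saved at each iteration:

```python
# Sketch: arranging the first fake image from each of 100 iterations
# into a 10 x 10 grid. Random noise stands in for generator output here.
import numpy as np
import matplotlib
matplotlib.use("Agg")  # non-interactive backend, saves to file
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
fake_images = [rng.random((28, 28)) for _ in range(100)]  # placeholder

fig, axes = plt.subplots(10, 10, figsize=(10, 10))
for ax, img in zip(axes.ravel(), fake_images):
    ax.imshow(img, cmap="gray")
    ax.axis("off")
fig.savefig("fake_images_grid.png")
```

Reading the grid row by row makes it easy to see how the generated pattern evolves over the course of training.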
Compared to the digit five, the digit eight takes more iterations before a recognizable pattern starts to form.
In this section, we experimented with additional convolutional layers in the generator and discriminator networks. Based on this, we can make the following observations:
- The additional convolutional layers seem to have had a positive impact: the fake images began to resemble handwritten fives much sooner.
- Although the results for the data used in this chapter were decent, other data may require further changes to the model architecture.
- We also used the network with the same architecture to generate realistic-looking fake images of the handwritten digit eight. For this digit, more training iterations were needed before a recognizable pattern started to emerge.
- Note that a network for generating all 10 handwritten digits at the same time would be more complex and would likely require many more training iterations.
- Similarly, color images with dimensions significantly larger than the 28 x 28 used in this chapter will demand more computational resources and make the task even more challenging.
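As noted above, retargeting the same architecture from fives to eights mainly comes down to filtering the training data to a single digit class. A minimal sketch, using placeholder arrays `x` and `y` in place of the real MNIST-style images and labels:

```python
# Sketch: keeping only one digit class (here, eights) before GAN training.
# x and y are random placeholders standing in for the real dataset arrays.
import numpy as np

rng = np.random.default_rng(42)
x = rng.random((1000, 28, 28))        # placeholder 28x28 grayscale images
y = rng.integers(0, 10, size=1000)    # placeholder labels in 0-9

x_eights = x[y == 8]                  # boolean mask keeps only label-8 images
print(x_eights.shape)
```

The same one-line boolean mask works for any target digit, so the experiments for five and eight differ only in the data passed to the training loop.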