Summary
In this chapter, we introduced a new class of generative models called Generative Adversarial Networks (GANs). Inspired by concepts from game theory, GANs model the data-generating probability density implicitly rather than explicitly. We began with the details of how GANs actually work, covering key concepts such as the value function for the minimax game, along with a few variants, such as the non-saturating generator loss and the maximum likelihood game. We then developed a multi-layer perceptron-based vanilla GAN from scratch to generate MNIST digits.
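As a quick refresher on the minimax game mentioned above, the two-player objective can be written as follows (this is the standard formulation; the notation here, with $p_{\text{data}}$ for the data distribution and $p_z$ for the generator's input noise prior, follows the usual convention and may differ slightly from the symbols used earlier in the chapter):

```latex
% Minimax value function: discriminator D maximizes, generator G minimizes
\min_G \max_D V(D, G) =
  \mathbb{E}_{x \sim p_{\text{data}}(x)}\big[\log D(x)\big]
  + \mathbb{E}_{z \sim p_z(z)}\big[\log\big(1 - D(G(z))\big)\big]

% Non-saturating variant: instead of minimizing log(1 - D(G(z))),
% the generator maximizes log D(G(z)), which gives stronger gradients
% early in training when D easily rejects generated samples
\max_G \; \mathbb{E}_{z \sim p_z(z)}\big[\log D\big(G(z)\big)\big]
```

The non-saturating form addresses the vanishing-gradient problem of the original generator loss: when the discriminator confidently rejects fake samples, $\log(1 - D(G(z)))$ saturates, whereas $\log D(G(z))$ still provides a useful training signal.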
Then, we touched upon a few improved GAN architectures: deep convolutional GANs (DCGANs), conditional GANs, and finally, an advanced variant called progressive GANs. We went through the nitty-gritty of this advanced setup and used a pretrained model to generate fake faces. In the final section, we discussed a few common challenges associated with training GANs, such as mode collapse and instability.
This chapter was the foundation required before...