Vanilla GAN
We will now apply these concepts and train a GAN from scratch to generate MNIST digits. The overall GAN setup is visualized in Figure 12.8. The figure outlines a generator model that takes a noise vector as input and passes it through repeating blocks, which transform and scale up the vector to the required dimensions. Each block consists of a dense layer, followed by a LeakyReLU activation and a batch-normalization layer. The output of the final block is simply reshaped into the required output image size.

Figure 12.8: Vanilla GAN architecture
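As a rough sketch of this generator in code, the snippet below assumes a TensorFlow/Keras setup; the layer widths (256, 512, 1024), the LeakyReLU slope of 0.2, the 100-dimensional noise vector, and the tanh output (which presumes pixel values scaled to [-1, 1]) are illustrative choices rather than values prescribed by the text:

from tensorflow.keras import layers, Sequential

def build_generator(z_dim=100, img_shape=(28, 28, 1)):
    # Repeating blocks of Dense -> LeakyReLU -> BatchNorm that
    # progressively scale the noise vector up
    model = Sequential([
        layers.Input(shape=(z_dim,)),
        layers.Dense(256),
        layers.LeakyReLU(0.2),
        layers.BatchNormalization(),
        layers.Dense(512),
        layers.LeakyReLU(0.2),
        layers.BatchNormalization(),
        layers.Dense(1024),
        layers.LeakyReLU(0.2),
        layers.BatchNormalization(),
        # Final dense layer sized to the flattened image, then reshaped;
        # tanh assumes images normalized to [-1, 1]
        layers.Dense(28 * 28 * 1, activation='tanh'),
        layers.Reshape(img_shape),
    ])
    return model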
The discriminator, on the other hand, is a simple feedforward network. This model takes an image as input (either a real image or the fake output from the generator) and classifies it as real or fake. This simple setup of two competing models is what drives the training of the overall GAN.
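As a brief reminder, the two models are adversaries in the standard minimax game: the discriminator D tries to maximize its classification accuracy, while the generator G tries to minimize it by producing convincing fakes:

\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}[\log D(x)] + \mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z)))]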
The first and foremost step is to define the discriminator model. In this implementation, we will use a very basic multi-layer perceptron, or MLP, as a discriminator...
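For instance, a minimal MLP discriminator along these lines might look as follows; again, the specific layer sizes are illustrative assumptions rather than the exact values used here:

from tensorflow.keras import layers, Sequential

def build_discriminator(img_shape=(28, 28, 1)):
    # Flatten the input image and classify it as real (1) or fake (0)
    model = Sequential([
        layers.Input(shape=img_shape),
        layers.Flatten(),
        layers.Dense(512),
        layers.LeakyReLU(0.2),
        layers.Dense(256),
        layers.LeakyReLU(0.2),
        # Sigmoid output: the probability that the input image is real
        layers.Dense(1, activation='sigmoid'),
    ])
    model.compile(optimizer='adam',
                  loss='binary_crossentropy',
                  metrics=['accuracy'])
    return model

A sigmoid output paired with a binary cross-entropy loss is the natural fit for the real-versus-fake classification task the discriminator performs.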