Discriminative and generative modeling, and Bayes’ theorem

Now, let us consider how these rules of conditional and joint probability relate to the kinds of predictive models that we build for various machine learning applications. In most cases, such as predicting whether an email is fraudulent or the dollar amount of the future lifetime value of a customer, we are interested in the conditional probability P(Y|X=x), where Y is the outcome we are trying to model, X represents the input features, and x is a particular value of those features. For example, we are trying to calculate the probability that an email is fraudulent based on knowledge of the set of words (the x) in the message. This approach is known as discriminative modeling [15, 16, 17]. Discriminative modeling attempts to learn a direct mapping between the data, X, and the outcomes, Y.

Another way to understand discriminative modeling is in the context of Bayes’ theorem [18], which relates the conditional and joint probabilities of a dataset, as follows:

P(Y|X) = P(X|Y)P(Y)/P(X) = P(X, Y)/P(X)

As a side note, the theorem was published two years after the author’s death, and in a foreword, Richard Price described it as a mathematical argument for the existence of God, perhaps appropriate given that Thomas Bayes served as a Reverend during his life. In the formula for Bayes’ theorem, P(X|Y) is known as the likelihood, the support that the observation X lends to the outcome Y; P(X) is the evidence, the overall probability of the observed data; P(Y) is the prior, the baseline plausibility of the outcome; and P(Y|X) is the posterior, the probability of the outcome given all the data we have observed related to it thus far. Conceptually, Bayes’ theorem states that the posterior probability of an outcome is proportional to the product of its baseline (prior) probability and the probability of the input data conditional on this outcome.
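To make the formula concrete, consider a small numerical sketch (the numbers are invented purely for illustration): suppose 20% of email is fraudulent, the word "prize" appears in 40% of fraudulent messages, and it appears in only 2% of legitimate ones. Bayes’ theorem then gives the posterior probability that a message containing "prize" is fraudulent:

```python
# Hypothetical numbers chosen for illustration only.
p_fraud = 0.20              # prior P(Y = fraud)
p_word_given_fraud = 0.40   # likelihood P(X = "prize" | Y = fraud)
p_word_given_legit = 0.02   # likelihood P(X = "prize" | Y = legit)

# Evidence P(X) via the law of total probability.
p_word = p_word_given_fraud * p_fraud + p_word_given_legit * (1 - p_fraud)

# Posterior P(Y = fraud | X = "prize") from Bayes' theorem.
p_fraud_given_word = p_word_given_fraud * p_fraud / p_word
print(f"P(fraud | 'prize') = {p_fraud_given_word:.3f}")  # ~0.833
```

The posterior of roughly 0.83 is much higher than the 0.20 prior, reflecting the evidence that the observed word contributes.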

In the context of discriminative learning, we can thus see that a discriminative model directly computes the posterior; we could have a model of the likelihood or prior, but it is not required in this approach. Even though you may not have realized it, most of the models you have probably used in the machine learning toolkit are discriminative, such as:

  • Linear regression
  • Logistic regression
  • Random forests [19, 20]
  • Gradient-boosted decision trees (GBDTs) [21]
  • Support vector machines (SVMs) [22]

The first two (linear and logistic regression) model the outcome Y conditional on the data X using a Normal or Gaussian (linear regression) or sigmoidal (logistic regression) probability function. In contrast, the last three have no formal probability model: they compute a function (an ensemble of trees for random forests or GBDTs, or an inner product in a kernel space for SVMs) that maps X to Y, using a loss or error function to tune those estimates; given this nonparametric nature, some authors have argued that these constitute a separate class of "non-model" or "non-parametric" discriminative algorithms [15].
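As a minimal sketch of what "directly computes the posterior" means in practice (the use of scikit-learn and synthetic data here is an illustrative assumption, not code from this chapter), a logistic regression estimates P(Y|X) without ever modeling how X itself is distributed:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Synthetic binary classification data standing in for, e.g., email features.
X, y = make_classification(n_samples=500, n_features=5, random_state=0)

# A discriminative model: it estimates the posterior P(Y | X) directly.
clf = LogisticRegression().fit(X, y)

# predict_proba returns P(Y = 0 | x) and P(Y = 1 | x) for each input;
# no model of P(X) or P(X | Y) is learned along the way.
print(clf.predict_proba(X[:3]))
```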

In contrast, a generative model attempts to learn the joint distribution P(Y, X) of the labels and the input data. Recall the definition of joint probability:

P(X, Y) = P(X|Y)P(Y)

Using this, we can rewrite Bayes’ theorem as:

P(Y|X) = P(X, Y)/P(X)

Instead of learning a direct mapping of X to Y using P(Y|X), as in the discriminative case, our goal is to model the joint probability of X and Y, P(X, Y). While we can use the resulting joint distribution of X and Y to compute the posterior P(Y|X) and learn a "targeted" model, we can also use this distribution to sample new instances of the data, either by jointly sampling new tuples (x, y) or by sampling new data inputs for a target label y using the expression:

P(X|Y=y) = P(X, Y=y)/P(Y=y)
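As a toy illustration of these manipulations (the joint probability table below is made up for the example), a single tabulated P(X, Y) yields both the posterior P(Y|X) that a discriminative model targets and the conditional P(X|Y=y) from which we can sample new data:

```python
import numpy as np

# Hypothetical joint distribution P(X, Y): rows index X in {0, 1}, columns index Y in {0, 1}.
joint = np.array([[0.35, 0.10],
                  [0.15, 0.40]])          # entries sum to 1

p_x = joint.sum(axis=1)                   # marginal P(X)
p_y = joint.sum(axis=0)                   # marginal P(Y)

# Posterior P(Y | X = 1): divide the joint by the marginal of X.
print("P(Y | X=1) =", joint[1] / p_x[1])

# Conditional P(X | Y = 1): divide the joint by the marginal of Y,
# then sample new x values for the chosen label y = 1.
p_x_given_y1 = joint[:, 1] / p_y[1]
samples = np.random.default_rng(0).choice([0, 1], size=5, p=p_x_given_y1)
print("samples of X given Y=1:", samples)
```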

Examples of generative models include:

  • Naive Bayes classifiers
  • Gaussian mixture models
  • Latent Dirichlet allocation (LDA)
  • Hidden Markov models
  • Deep Boltzmann machines
  • VAEs
  • GANs

Naive Bayes classifiers, though named and used as classifiers (a discriminative task), utilize Bayes’ theorem to learn the joint distribution of X and Y under the assumption that the X variables are conditionally independent given Y. Similarly, Gaussian mixture models describe the likelihood of a data point belonging to one of a group of normal distributions using the joint probability of the cluster label and these distributions. LDA represents a document as the joint probability of its words and a set of underlying topics (lists of keywords) from which the document is drawn. Hidden Markov models express the joint probability of a state and the next state of a piece of data, such as the weather on successive days of the week. The VAE and GAN models we cover in later chapters also utilize joint distributions to map between complex data types; this mapping allows us to generate data from random vectors or transform one kind of data into another.
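For example, a Gaussian Naive Bayes classifier can be used generatively. The sketch below is a hypothetical illustration using scikit-learn’s GaussianNB on synthetic data; it relies on the fitted theta_ and var_ attributes (var_ was named sigma_ in older scikit-learn releases). It computes the posterior like any classifier and then draws new feature vectors from the learned class-conditional distribution P(X|Y=y):

```python
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.naive_bayes import GaussianNB

# Two synthetic clusters standing in for two classes.
X, y = make_blobs(n_samples=300, centers=2, n_features=2, random_state=0)

nb = GaussianNB().fit(X, y)

# Use 1: the posterior P(Y | X), as in any classifier.
print(nb.predict_proba(X[:2]))

# Use 2: sample new feature vectors for a chosen label y, drawing from the
# learned class-conditional P(X | Y = y) via the fitted means and variances.
rng = np.random.default_rng(0)
y_target = 1
new_x = rng.normal(loc=nb.theta_[y_target],
                   scale=np.sqrt(nb.var_[y_target]),
                   size=(5, X.shape[1]))
print(new_x)
```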

As mentioned previously, another view of generative models is that they allow us to generate samples of X if we know an outcome Y. In the first four models listed previously, this conditional probability is just a component of the model formula, with the posterior estimates still being the ultimate objective. However, in the last three examples, which are all deep neural network models, learning the conditional probability of X given a hidden or "latent" variable Z is actually the main objective, in order to generate new data samples. Using the rich structure allowed by multi-layered neural networks, these models can approximate the distribution of complex data types such as images, natural language, and sound. Also, instead of being a target value, Z is often a random vector in these applications, serving merely as an input from which to generate a large space of hypothetical data points. To the extent we have a label (such as whether a generated image should be of a dog or a dolphin, or the genre of a generated song), the model is P(X|Y=y, Z=z), where the label Y "controls" the generation of data that would otherwise vary freely due to the random nature of Z.
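To give a feel for how a label Y can "control" generation from a random latent vector Z, here is a minimal, untrained conditional generator in PyTorch; the layer sizes and the choice of concatenating a label embedding with the noise vector are illustrative assumptions, not the specific architectures developed later in the book:

```python
import torch
from torch import nn

class ConditionalGenerator(nn.Module):
    """Maps a random latent vector z and a class label y to a data sample x."""
    def __init__(self, latent_dim=16, n_classes=2, data_dim=64):
        super().__init__()
        self.label_embed = nn.Embedding(n_classes, latent_dim)
        self.net = nn.Sequential(
            nn.Linear(latent_dim * 2, 128),
            nn.ReLU(),
            nn.Linear(128, data_dim),
        )

    def forward(self, z, y):
        # Condition the generator by concatenating the label embedding with z:
        # the output plays the role of a draw from P(X | Y = y, Z = z).
        h = torch.cat([z, self.label_embed(y)], dim=1)
        return self.net(h)

gen = ConditionalGenerator()
z = torch.randn(4, 16)              # random latent inputs Z
y = torch.tensor([0, 1, 0, 1])      # labels Y that steer the generation
x_fake = gen(z, y)                  # four generated samples (untrained here)
print(x_fake.shape)                 # torch.Size([4, 64])
```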
