GAN theory #
TorchLean's baseline GAN spec uses least-squares GAN losses. This avoids partial logarithms in the public spec surface while still capturing the generator/discriminator game:
- the discriminator regresses real samples toward score 1 and generated samples toward score 0;
- the generator tries to make generated samples score as real.
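Written out in the standard 0/1 coding of Mao et al. (a sketch; the ½ scaling factors follow their paper's convention, which the spec may or may not include):

```latex
% Discriminator: regress real scores toward 1, generated scores toward 0.
\min_D \; \tfrac{1}{2}\,\mathbb{E}_{x \sim p_{\mathrm{data}}}\!\left[(D(x)-1)^2\right]
        + \tfrac{1}{2}\,\mathbb{E}_{z \sim p_z}\!\left[D(G(z))^2\right]

% Generator: regress generated-sample scores toward the real target 1.
\min_G \; \tfrac{1}{2}\,\mathbb{E}_{z \sim p_z}\!\left[(D(G(z))-1)^2\right]
```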
These lemmas expose the exact composition and loss decomposition used by examples and downstream verification work.
The mathematical reason this file is more compact than the VAE/VQ-VAE theory files is that the
public spec deliberately chooses the LSGAN square-loss objective rather than the original minimax
log D + log(1-D) objective. That keeps the executable Lean surface total over every supported
scalar backend; logarithmic GAN objectives belong in a domain-restricted layer with explicit
0 < D(x) < 1 assumptions. The facts below are therefore the safe core: they make the
generator/discriminator composition and the score-regression targets transparent, without smuggling
analytic side conditions into a total spec.
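The totality point can be made concrete with a small sketch. The names below are illustrative, not TorchLean's API: the square-loss term is total in the score, while a log-based minimax term only makes sense on scores in the open interval (0, 1) and so must carry that side condition as an explicit hypothesis.

```lean
-- Illustrative sketch only; these names are not TorchLean identifiers.
-- The square loss is total: defined for every score.
def lsganRealTerm (s : Float) : Float := (s - 1) * (s - 1)

-- A log-based minimax term is only meaningful for 0 < s < 1, so it
-- belongs in a domain-restricted layer that carries the side condition
-- as an explicit hypothesis rather than in the total public spec.
def minimaxRealTerm (s : Float) (_h : 0 < s ∧ s < 1) : Float :=
  -Float.log s
```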
- Ian Goodfellow et al., "Generative Adversarial Nets", NeurIPS 2014.
- Xudong Mao et al., "Least Squares Generative Adversarial Networks", ICCV 2017.
- Martin Arjovsky, Soumith Chintala, and Léon Bottou, "Wasserstein GAN", ICML 2017, for the broader critic-vs-discriminator viewpoint that motivates keeping the score head abstract.
Generated samples are obtained by applying the generator to latent noise.
Fake scores are discriminator scores on generated samples.
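The composition these two definitions encode can be sketched as follows (hypothetical single-sample names; TorchLean's actual definitions operate on batched tensors over abstract scalar backends):

```lean
-- Hypothetical single-sample sketch; the real spec is batched.
variable {Z X S : Type}

/-- Generated sample: the generator applied to latent noise. -/
def generated (G : Z → X) (z : Z) : X := G z

/-- Fake score: the discriminator's score on a generated sample. -/
def fakeScore (D : X → S) (G : Z → X) (z : Z) : S := D (generated G z)

/-- Fake scores are exactly the composition `D ∘ G` applied to noise. -/
theorem fakeScore_eq_comp (D : X → S) (G : Z → X) (z : Z) :
    fakeScore D G z = D (G z) := rfl
```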
LSGAN generator loss regresses fake scores toward the real target.
This is the generator side of Mao et al.'s least-squares objective: the generator does not receive the discriminator's "fake" target; it tries to move generated samples onto the real-score target.
LSGAN discriminator loss splits into real-score and fake-score regression terms.
The first term pushes D(x_real) toward 1; the second pushes D(G(z)) toward 0. Keeping this
as a named theorem gives downstream examples and proof files a stable reference point for the game
semantics, instead of requiring them to unfold the spec directly.
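As a self-contained single-sample sketch (illustrative names, 0/1 targets assumed; the spec's batched definitions differ in detail):

```lean
-- Illustrative single-sample versions of the two LSGAN losses.
def sq (t : Float) : Float := t * t

/-- Generator loss: regress the fake score toward the real target 1. -/
def lsganGLoss (sFake : Float) : Float := sq (sFake - 1)

/-- Discriminator loss: real-score term plus fake-score term. -/
def lsganDLoss (sReal sFake : Float) : Float :=
  sq (sReal - 1) + sq (sFake - 0)

/-- The split into real-score and fake-score regression terms, here
    holding by definition. -/
theorem lsganDLoss_decomp (sReal sFake : Float) :
    lsganDLoss sReal sFake = sq (sReal - 1) + sq (sFake - 0) := rfl
```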
Connection to shared objective algebra and equilibrium checks #
Package the LSGAN generator objective as a two-term objective with no regularizer.
Package the LSGAN discriminator objective as real-score plus fake-score regression.
LSGAN generator loss is the shared two-term objective with zero regularizer.
LSGAN discriminator loss is the shared three-term objective with zero regularizer.
If generated samples receive the real target score, the LSGAN generator loss is zero.
If real samples receive target 1 and generated samples receive target 0, the LSGAN
discriminator loss is zero.
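These equilibrium facts amount to the observation that every square-error term vanishes when each score sits exactly on its target. A self-contained check (hypothetical helper name, 0/1 targets assumed):

```lean
/-- Square error of a score against its target. -/
def sqErr (s target : Float) : Float := (s - target) * (s - target)

-- Generator at equilibrium: fake score already equals the real target 1.
#eval sqErr 1 1               -- evaluates to 0

-- Discriminator at equilibrium: real score 1, fake score 0.
#eval sqErr 1 1 + sqErr 0 0   -- evaluates to 0
```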