TorchLean API

NN.MLTheory.Generative.Latent.GAN

GAN theory #

TorchLean's baseline GAN spec uses least-squares GAN losses. This avoids partial logarithms in the public spec surface while still capturing the generator/discriminator game.
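Concretely, a standard per-sample rendering of the least-squares game, with the real target 1 and fake target 0 used throughout this file, is (this is notation for orientation, not the spec itself):

```latex
\mathcal{L}_G \;=\; \big(D(G(z)) - 1\big)^2,
\qquad
\mathcal{L}_D \;=\; \big(D(x_{\mathrm{real}}) - 1\big)^2 \;+\; \big(D(G(z)) - 0\big)^2
```

The spec's actual losses are the tensor-level regression analogues of these scalar expressions.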

These lemmas expose the exact composition and loss decomposition used by examples and downstream verification work.

The mathematical reason this file is more compact than the VAE/VQ-VAE theory files is that the public spec deliberately chooses the LSGAN square-loss objective rather than the original minimax log D + log(1-D) objective. That keeps the executable Lean surface total over every supported scalar backend; logarithmic GAN objectives belong in a domain-restricted layer with explicit 0 < D(x) < 1 assumptions. The facts below are therefore the safe core: they make the generator/discriminator composition and the score-regression targets transparent, without smuggling analytic side conditions into a total spec.
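For contrast, the original minimax objective that the total spec deliberately avoids is, in standard notation:

```latex
\min_G \max_D \;\;
\mathbb{E}_{x \sim p_{\mathrm{data}}}\big[\log D(x)\big]
\;+\;
\mathbb{E}_{z \sim p_z}\big[\log\big(1 - D(G(z))\big)\big]
```

Both logarithms are partial unless 0 < D(·) < 1, which is exactly the side condition that would otherwise have to be threaded through every statement; the LSGAN square losses need no such domain restriction.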

@[simp]
theorem NN.MLTheory.Generative.Latent.GAN.generate_eq_generator {α : Type} [Context α] {latent obs : Spec.Shape} (model : Generative.GAN.Model α latent obs) (z : Spec.Tensor α latent) :

Generated samples are obtained by applying the generator to latent noise.

@[simp]

Fake scores are discriminator scores on generated samples.

@[simp]

LSGAN generator loss regresses fake scores toward the real target.

This is the generator side of Mao et al.'s least-squares objective: the generator is not regressed toward the discriminator's "fake" target; instead it tries to move the scores of generated samples onto the real-score target.
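The score-regression semantics can be sketched over plain `Float` lists, which stand in for the spec's tensors (the names below are illustrative, not the TorchLean API):

```lean
-- Mean-squared regression of a list of scores toward a target.
def mse (scores : List Float) (target : Float) : Float :=
  let n := Float.ofNat scores.length
  (scores.map (fun s => (s - target) * (s - target))).foldl (· + ·) 0 / n

-- LSGAN generator loss: regress the discriminator's fake scores
-- toward the *real* target 1, never toward the fake target 0.
def generatorLoss (fakeScores : List Float) : Float :=
  mse fakeScores 1.0

#eval generatorLoss [0.5, 0.5]  -- 0.25: halfway scores pay a quadratic penalty
```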

@[simp]

LSGAN discriminator loss splits into real-score and fake-score regression terms.

The first term pushes D(x_real) toward 1; the second pushes D(G(z)) toward 0. Keeping this as a named theorem gives downstream examples and proof files a stable reference point for the game semantics, instead of requiring them to unfold the spec directly.
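The two-term decomposition can be sketched in the same `Float`-list style (hypothetical names, standing in for the spec's tensor losses):

```lean
-- Mean-squared regression of a list of scores toward a target.
def mse (scores : List Float) (target : Float) : Float :=
  let n := Float.ofNat scores.length
  (scores.map (fun s => (s - target) * (s - target))).foldl (· + ·) 0 / n

-- LSGAN discriminator loss as a sum of two regressions:
def discriminatorLoss (realScores fakeScores : List Float) : Float :=
  mse realScores 1.0    -- push D(x_real) toward 1
    + mse fakeScores 0.0  -- push D(G(z)) toward 0
```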

Connection to shared objective algebra and equilibrium checks #

noncomputable def NN.MLTheory.Generative.Latent.GAN.generatorObjectiveTerms {α : Type} [Context α] {latent obs : Spec.Shape} [DecidableRel fun (x1 x2 : α) => x1 > x2] (model : Generative.GAN.Model α latent obs) (z : Spec.Tensor α latent) :

Package the LSGAN generator objective as a two-term objective with no regularizer.

Instances For
    noncomputable def NN.MLTheory.Generative.Latent.GAN.discriminatorObjectiveTerms {α : Type} [Context α] {latent obs : Spec.Shape} [DecidableRel fun (x1 x2 : α) => x1 > x2] (model : Generative.GAN.Model α latent obs) (xReal : Spec.Tensor α obs) (z : Spec.Tensor α latent) :

    Package the LSGAN discriminator objective as real-score plus fake-score regression.

    Instances For
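      The packaging performed by these two definitions can be pictured with a minimal objective record. This is a sketch only: the actual shared objective algebra and its field names are not shown in this file.

      ```lean
      -- Hypothetical shape of a shared objective: a list of loss terms
      -- plus an explicit regularizer, so the total is a single fold.
      structure Objective where
        terms       : List Float
        regularizer : Float

      def Objective.total (o : Objective) : Float :=
        o.terms.foldl (· + ·) o.regularizer

      -- Discriminator packaging: real-score and fake-score regression
      -- terms, with no regularizer, matching the decomposition above.
      def discObjective (realTerm fakeTerm : Float) : Objective :=
        { terms := [realTerm, fakeTerm], regularizer := 0.0 }
      ```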

      LSGAN generator loss is the shared two-term objective with zero regularizer.

      theorem NN.MLTheory.Generative.Latent.GAN.discriminatorLoss_eq_weightedThreeTerm {α : Type} [Context α] {latent obs : Spec.Shape} [DecidableRel fun (x1 x2 : α) => x1 > x2] (model : Generative.GAN.Model α latent obs) (xReal : Spec.Tensor α obs) (z : Spec.Tensor α latent) (weight : α) :

      LSGAN discriminator loss is the shared three-term objective with zero regularizer.

      If generated samples receive the real target score, the LSGAN generator loss is zero.

      If real samples receive target 1 and generated samples receive target 0, the LSGAN discriminator loss is zero.
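      These equilibrium facts can be spot-checked numerically with the same `Float`-list sketch of mean-squared score regression (hypothetical names, not the spec's API):

      ```lean
      def mse (scores : List Float) (target : Float) : Float :=
        let n := Float.ofNat scores.length
        (scores.map (fun s => (s - target) * (s - target))).foldl (· + ·) 0 / n

      -- Generator loss vanishes when every fake score hits the real target 1.
      #eval mse [1.0, 1.0, 1.0] 1.0  -- 0.0

      -- Discriminator loss vanishes when real scores hit 1 and fake scores hit 0.
      #eval mse [1.0, 1.0] 1.0 + mse [0.0, 0.0] 0.0  -- 0.0
      ```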