TorchLean API

NN.MLTheory.Generative.Latent.Objective

Shared latent-objective algebra #

We use this file to connect the latent generative models without forcing them into one artificial architecture. VAEs, VQ-VAEs, and GANs make different modeling choices, but their training objectives share a small amount of algebra: a weighted two-term shape, base + weight * regularizer, and a weighted three-term shape, base + middle + weight * regularizer.

Keeping that algebra in one place gives the model-specific files a common language. The VAE file can say "β-VAE is a weighted two-term objective"; the VQ-VAE file can say "VQ-VAE is a weighted three-term objective"; the GAN file can say "LSGAN is score-regression algebra over the same scalar loss vocabulary."

A two-term latent objective: base + weight * regularizer.

• base :

  Main data-fitting term, typically reconstruction or score regression.

• regularizer :

  Latent regularizer, KL term, commitment loss, or critic penalty.

Instances For

Evaluate a weighted two-term objective.

Instances For
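As a concrete sketch of the shape described above: the names `WeightedTwoTerm` and `weightedTwoTerm` come from this file, but the definitions below are a plausible reconstruction from the docstrings, not the verified source.

```lean
import Mathlib.Data.Real.Basic

-- Assumed reconstruction of the two-term objective; field names follow
-- the docstrings above, the exact definitions are not copied from source.
structure WeightedTwoTerm where
  base : ℝ         -- main data-fitting term
  regularizer : ℝ  -- latent regularizer (KL, commitment, critic penalty)

def weightedTwoTerm (weight : ℝ) (t : WeightedTwoTerm) : ℝ :=
  t.base + weight * t.regularizer

-- A β-VAE objective is then a weighted two-term objective with
-- `base := reconstruction` and `regularizer := KL`:
example (β recon kl : ℝ) :
    weightedTwoTerm β { base := recon, regularizer := kl } = recon + β * kl :=
  rfl
```

Under this reading, "β-VAE is a weighted two-term objective" is literally a choice of the two fields plus a weight.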

A three-term latent objective: base + middle + weight * regularizer.

• base :

  Main data-fitting term.

• middle :

  Unweighted auxiliary term, such as the VQ-VAE codebook loss.

• regularizer :

  Weighted regularizer, such as the VQ-VAE commitment loss.

Instances For

Evaluate a weighted three-term objective.

Instances For
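A matching sketch for the three-term shape; as before, the definitions are an assumed reconstruction from the docstrings, not the verified source.

```lean
import Mathlib.Data.Real.Basic

-- Assumed reconstruction of the three-term objective described above.
structure WeightedThreeTerm where
  base : ℝ         -- main data-fitting term
  middle : ℝ       -- unweighted auxiliary term (e.g. VQ-VAE codebook loss)
  regularizer : ℝ  -- weighted regularizer (e.g. VQ-VAE commitment loss)

def weightedThreeTerm (weight : ℝ) (t : WeightedThreeTerm) : ℝ :=
  t.base + t.middle + weight * t.regularizer

-- A VQ-VAE objective: reconstruction + codebook + β · commitment.
example (β recon codebook commit : ℝ) :
    weightedThreeTerm β
      { base := recon, middle := codebook, regularizer := commit }
      = recon + codebook + β * commit :=
  rfl
```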
@[simp]
theorem NN.MLTheory.Generative.Latent.Objective.weightedTwoTerm_zero_regularizer (weight base : ℝ) :
weightedTwoTerm weight { base := base, regularizer := 0 } = base

A weighted two-term objective collapses to its base term when the regularizer is zero.

@[simp]
theorem NN.MLTheory.Generative.Latent.Objective.weightedTwoTerm_zero_weight (terms : WeightedTwoTerm) :
weightedTwoTerm 0 terms = terms.base

At weight zero, a weighted two-term objective ignores the regularizer.

theorem NN.MLTheory.Generative.Latent.Objective.weightedTwoTerm_mono_weight (terms : WeightedTwoTerm) {w₁ w₂ : ℝ} (hw : w₁ ≤ w₂) (hreg : 0 ≤ terms.regularizer) :
weightedTwoTerm w₁ terms ≤ weightedTwoTerm w₂ terms

If the regularizer is nonnegative, increasing the weight can only increase a two-term objective.

This is the algebraic core behind β-VAE monotonicity in the KL weight.
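The monotonicity argument is short enough to show in full. The block below is self-contained: the structure and evaluator are assumed reconstructions of this file's `WeightedTwoTerm` API, and the `example` mirrors the stated lemma.

```lean
import Mathlib

-- Assumed reconstruction of the two-term objective from this file.
structure WeightedTwoTerm where
  base : ℝ
  regularizer : ℝ

def weightedTwoTerm (weight : ℝ) (t : WeightedTwoTerm) : ℝ :=
  t.base + weight * t.regularizer

-- Monotonicity in the weight, mirroring `weightedTwoTerm_mono_weight`:
-- multiplying a nonnegative regularizer by a larger weight only grows
-- the weighted term, and the shared base term cancels.
example (t : WeightedTwoTerm) {w₁ w₂ : ℝ}
    (hw : w₁ ≤ w₂) (hreg : 0 ≤ t.regularizer) :
    weightedTwoTerm w₁ t ≤ weightedTwoTerm w₂ t := by
  unfold weightedTwoTerm
  have h := mul_le_mul_of_nonneg_right hw hreg
  linarith
```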

@[simp]
theorem NN.MLTheory.Generative.Latent.Objective.weightedThreeTerm_zero_regularizer (weight base middle : ℝ) :
weightedThreeTerm weight { base := base, middle := middle, regularizer := 0 } = base + middle

A weighted three-term objective collapses to base + middle when the regularizer is zero.

@[simp]
theorem NN.MLTheory.Generative.Latent.Objective.weightedThreeTerm_zero_middle_zero_regularizer (weight base : ℝ) :
weightedThreeTerm weight { base := base, middle := 0, regularizer := 0 } = base

A weighted three-term objective collapses to base when both auxiliary terms are zero.

@[simp]
theorem NN.MLTheory.Generative.Latent.Objective.weightedThreeTerm_zero_weight (terms : WeightedThreeTerm) :
weightedThreeTerm 0 terms = terms.base + terms.middle

At weight zero, a weighted three-term objective keeps only its base and middle terms.

theorem NN.MLTheory.Generative.Latent.Objective.weightedThreeTerm_mono_weight (terms : WeightedThreeTerm) {w₁ w₂ : ℝ} (hw : w₁ ≤ w₂) (hreg : 0 ≤ terms.regularizer) :
weightedThreeTerm w₁ terms ≤ weightedThreeTerm w₂ terms

If the weighted regularizer is nonnegative, increasing the weight can only increase a three-term objective.

For VQ-VAE this is the commitment-weight monotonicity statement.
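The three-term case reduces to the same algebra: the unweighted middle term appears on both sides of the inequality and cancels. The block below is a self-contained sketch under the same assumed reconstruction of the `WeightedThreeTerm` API.

```lean
import Mathlib

-- Assumed reconstruction of the three-term objective from this file.
structure WeightedThreeTerm where
  base : ℝ
  middle : ℝ
  regularizer : ℝ

def weightedThreeTerm (weight : ℝ) (t : WeightedThreeTerm) : ℝ :=
  t.base + t.middle + weight * t.regularizer

-- Commitment-weight monotonicity, mirroring `weightedThreeTerm_mono_weight`:
-- `base` and `middle` cancel, leaving the two-term argument on the
-- weighted regularizer.
example (t : WeightedThreeTerm) {w₁ w₂ : ℝ}
    (hw : w₁ ≤ w₂) (hreg : 0 ≤ t.regularizer) :
    weightedThreeTerm w₁ t ≤ weightedThreeTerm w₂ t := by
  unfold weightedThreeTerm
  have h := mul_le_mul_of_nonneg_right hw hreg
  linarith
```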