TorchLean API

NN.Spec.Generative.Latent

Latent-variable generative helpers

This module contains the shared spec-layer vocabulary for latent generative models: elementwise exp and half helpers, diagonal-Gaussian reparameterization, the KL term against a standard normal prior, and finite codebooks for vector-quantized latents.

The definitions are intentionally model-agnostic. A VAE, VQ-VAE, latent diffusion model, or normalizing-flow model can all reuse these primitives without committing to a particular backbone.

Elementwise exponential, useful for log-variance parameterizations.


Elementwise 0.5 * x, written as a tensor helper to make VAE equations readable.
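The two elementwise helpers admit a short spec-level sketch. Note that `Spec.Tensor.map` and `Context.exp` are assumptions about the surrounding TorchLean API, not confirmed definitions:

```lean
-- Hypothetical sketch; `Spec.Tensor.map` and `Context.exp` are assumed,
-- not confirmed parts of the TorchLean spec layer.
def exp' {α : Type} [Context α] {s : Spec.Shape}
    (x : Spec.Tensor α s) : Spec.Tensor α s :=
  x.map Context.exp

-- Elementwise 0.5 * x, assuming numeric literals and `Mul` on α.
def half' {α : Type} [Context α] {s : Spec.Shape}
    (x : Spec.Tensor α s) : Spec.Tensor α s :=
  x.map (fun v => (1/2 : α) * v)
```

Defining these once in the spec layer keeps the reparameterization and KL definitions below free of raw scalar arithmetic.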

def Generative.Latent.reparameterizeDiag {α : Type} [Context α] {latent : Spec.Shape} (mu logvar eps : Spec.Tensor α latent) : Spec.Tensor α latent

Diagonal-Gaussian reparameterization:

z = μ + exp(0.5 * logσ²) ⊙ ε.

This is the spec-level form of the VAE reparameterization trick. The noise ε is explicit, so the function stays pure and deterministic; runtime examples can supply deterministic or random noise.
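Assuming `Spec.Tensor` carries elementwise `Add`/`Mul` instances and that the module's `exp` and `half` helpers behave as described, the equation transcribes directly into a sketch like this (a plausible reconstruction, not the confirmed definition):

```lean
-- Hypothetical sketch: z = μ + exp(0.5 * logσ²) ⊙ ε.
-- Elementwise `+` and `*` on `Spec.Tensor` are assumed.
def reparameterizeDiag' {α : Type} [Context α] {latent : Spec.Shape}
    (mu logvar eps : Spec.Tensor α latent) : Spec.Tensor α latent :=
  mu + Generative.Latent.exp (Generative.Latent.half logvar) * eps
```

Because ε is an ordinary argument rather than a hidden sampler, proofs can instantiate it with zero noise to recover the posterior mean.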

def Generative.Latent.diagonalGaussianKlToStandard {α : Type} [Context α] {latent : Spec.Shape} (mu logvar : Spec.Tensor α latent) : α

Mean KL term for a diagonal Gaussian posterior against a standard normal prior.

For each latent coordinate:

KL(N(μ, σ²) || N(0,1)) = 0.5 * (exp(logσ²) + μ² - 1 - logσ²).

We return the mean across the latent shape, matching TorchLean's existing loss convention.
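The per-coordinate formula plus the mean reduction could be sketched as follows; `Spec.Tensor.mean`, elementwise arithmetic, and a constant-tensor literal `1` are assumptions about the API, not confirmed definitions:

```lean
-- Hypothetical sketch of the mean KL term
--   0.5 * (exp(logσ²) + μ² - 1 - logσ²)
-- reduced over the latent shape. `Spec.Tensor.mean`, elementwise
-- `+`/`-`/`*`, and the tensor literal `1` are assumed API.
def diagonalGaussianKlToStandard' {α : Type} [Context α] {latent : Spec.Shape}
    (mu logvar : Spec.Tensor α latent) : α :=
  Spec.Tensor.mean
    (Generative.Latent.half
      (Generative.Latent.exp logvar + mu * mu - 1 - logvar))
```

Returning the mean rather than the sum keeps the term on the same scale as the mean-reduced reconstruction losses elsewhere in TorchLean.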

structure Generative.Latent.Codebook (α : Type) (numCodes : ℕ) (latent : Spec.Shape) [Context α] : Type

A finite codebook for vector-quantized latent models.

• embedding : Fin numCodes → Spec.Tensor α latent

  Embedding vector for each code index.

def Generative.Latent.quantizeAt {α : Type} [Context α] {numCodes : ℕ} {latent : Spec.Shape} (book : Codebook α numCodes latent) (idx : Fin numCodes) : Spec.Tensor α latent

Quantize by an explicit code index.

Nearest-neighbor lookup is usually how VQ-VAE chooses this index during execution. The spec keeps the index explicit so proofs and verifiers can reason about a fixed code assignment without depending on an argmin/tie-breaking policy.
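Given the signature, a plausible definition is a plain codebook lookup (a sketch consistent with the stated types, not the confirmed source):

```lean
-- Hypothetical sketch: quantization by an explicit index is just a
-- lookup into the codebook's embedding function.
def quantizeAt' {α : Type} [Context α] {numCodes : ℕ} {latent : Spec.Shape}
    (book : Codebook α numCodes latent) (idx : Fin numCodes) :
    Spec.Tensor α latent :=
  book.embedding idx
```

A runtime would typically compute `idx` by minimizing the squared distance ‖x − book.embedding i‖² over `i : Fin numCodes` and then call `quantizeAt`; keeping that argmin outside the spec is what frees proofs from any tie-breaking policy.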
