Variational autoencoder (VAE) spec
This file gives TorchLean a small, backbone-independent VAE interface:
- an encoder maps an observation `x` to diagonal-Gaussian parameters `(μ, logσ²)`;
- a decoder maps a latent sample `z` back to observation space; and
- the loss combines reconstruction MSE with the diagonal-Gaussian KL term.
The design mirrors the original VAE formulation of Kingma and Welling (2014), while staying compatible with TorchLean's deterministic spec layer by making the reparameterization noise explicit.
References:
- Kingma and Welling (2014), "Auto-Encoding Variational Bayes".
- Rezende, Mohamed, and Wierstra (2014), "Stochastic Backpropagation and Approximate Inference in Deep Generative Models".
Diagonal-Gaussian encoder q_φ(z|x), returning (μ, logσ²).
- mean : Spec.Tensor α obs → Spec.Tensor α latent
Posterior mean.
- logvar : Spec.Tensor α obs → Spec.Tensor α latent
Posterior log-variance.
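Assembled from the two field signatures above, the encoder interface might be bundled as follows. This is a sketch, not the confirmed TorchLean definition: the structure name `Encoder` and the use of `List Nat` for shape indices are assumptions.

```lean
/-- Diagonal-Gaussian encoder `q_φ(z|x)`, returning `(μ, logσ²)`.
    Shape indices as `List Nat` are an assumption of this sketch. -/
structure Encoder (α : Type) (obs latent : List Nat) where
  /-- Posterior mean `μ`. -/
  mean : Spec.Tensor α obs → Spec.Tensor α latent
  /-- Posterior log-variance `logσ²`. -/
  logvar : Spec.Tensor α obs → Spec.Tensor α latent
```

Returning `logσ²` rather than `σ` keeps both statistics unconstrained in sign, so no positivity side condition is needed on the encoder output.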
Decoder/generator p_θ(x|z) represented by its reconstruction mean.
- forward : Spec.Tensor α latent → Spec.Tensor α obs
Decode a latent sample into observation space.
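The decoder carries a single field, matching the signature above; as with the encoder, the structure name and shape-index type here are assumptions of the sketch.

```lean
/-- Decoder/generator `p_θ(x|z)`, represented by its reconstruction mean. -/
structure Decoder (α : Type) (obs latent : List Nat) where
  /-- Decode a latent sample into observation space. -/
  forward : Spec.Tensor α latent → Spec.Tensor α obs
```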
Encode once and return (μ, logσ²).
Sample the latent using explicit noise.
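Making the noise an explicit argument is what keeps the spec layer deterministic: the sampling step becomes a pure function of `(μ, logσ², ε)`. A sketch, assuming `Spec.Tensor` supports pointwise `exp`, addition, multiplication, and scalar scaling (`Spec.exp` and `•` here are hypothetical names):

```lean
/-- Reparameterization with explicit noise: `z = μ + exp (½ · logσ²) * ε`.
    For `ε ~ N(0, I)` this yields `z ~ N(μ, σ²)`, but `ε` is supplied by
    the caller, so this definition itself is deterministic. -/
def reparameterize {α : Type} {latent : List Nat}
    (μ logvar ε : Spec.Tensor α latent) : Spec.Tensor α latent :=
  μ + Spec.exp ((0.5 : α) • logvar) * ε
```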
Full VAE forward pass: encode, reparameterize with explicit noise, then decode.
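The full forward pass is then the composition of the three pieces described above. The helper names (`Encoder`, `Decoder`, `reparameterize`) are hypothetical stand-ins for the operations this file specifies:

```lean
/-- Encode `x`, reparameterize with caller-supplied noise `ε`, then decode. -/
def vaeForward {α : Type} {obs latent : List Nat}
    (enc : Encoder α obs latent) (dec : Decoder α obs latent)
    (x : Spec.Tensor α obs) (ε : Spec.Tensor α latent) : Spec.Tensor α obs :=
  let μ      := enc.mean x
  let logvar := enc.logvar x
  dec.forward (reparameterize μ logvar ε)
```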
Reconstruction term, using mean-squared error in observation space.
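The reconstruction term could be written as below, assuming a hypothetical `Spec.mean` that averages a tensor across its full shape:

```lean
/-- Mean-squared error between an observation and its reconstruction,
    averaged over the observation shape. -/
def reconLoss {α : Type} {obs : List Nat}
    (x xhat : Spec.Tensor α obs) : α :=
  Spec.mean ((x - xhat) * (x - xhat))
```

MSE corresponds to a Gaussian likelihood on `p_θ(x|z)` with fixed variance, which is the usual reading of this term in the Kingma–Welling formulation.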
KL term KL(q_φ(z|x) || N(0,I)), averaged across the latent shape.
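For a diagonal Gaussian against `N(0, I)` the KL divergence has the closed form `½ · (μ² + exp logσ² − 1 − logσ²)` per coordinate. A sketch of the averaged version, assuming pointwise arithmetic, a broadcast `1`, and the hypothetical `Spec.exp`/`Spec.mean` helpers:

```lean
/-- `KL(q_φ(z|x) ‖ N(0, I))` for a diagonal Gaussian, averaged across
    the latent shape: `½ · mean (μ² + exp logσ² − 1 − logσ²)`. -/
def klLoss {α : Type} {latent : List Nat}
    (μ logvar : Spec.Tensor α latent) : α :=
  (0.5 : α) * Spec.mean (μ * μ + Spec.exp logvar - 1 - logvar)
```

Each summand is nonnegative and vanishes exactly at `μ = 0`, `logσ² = 0`, i.e. when the posterior matches the prior.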
β-VAE objective: reconstruction loss plus β times the diagonal-Gaussian KL term.
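The objective then combines the two losses described above; `reconLoss` and `klLoss` here are the hypothetical helpers for the reconstruction and KL terms:

```lean
/-- β-VAE objective: reconstruction MSE plus `β` times the diagonal-Gaussian
    KL term. `β = 1` recovers the standard (negated) VAE evidence lower bound. -/
def vaeLoss {α : Type} {obs latent : List Nat}
    (β : α) (x xhat : Spec.Tensor α obs)
    (μ logvar : Spec.Tensor α latent) : α :=
  reconLoss x xhat + β * klLoss μ logvar
```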
Forward expansion lemma used by examples and downstream proof files.
The VAE objective is exactly reconstruction plus a weighted KL term.
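If the objective is defined as the composition sketched above, the expansion lemma could be stated along the following lines, and would close definitionally; `vaeObjective` and the other names are hypothetical:

```lean
/-- The VAE objective unfolds to reconstruction plus `β` times the KL term. -/
theorem vaeObjective_expand {α : Type} {obs latent : List Nat}
    (enc : Encoder α obs latent) (dec : Decoder α obs latent)
    (β : α) (x : Spec.Tensor α obs) (ε : Spec.Tensor α latent) :
    vaeObjective enc dec β x ε
      = reconLoss x (vaeForward enc dec x ε)
        + β * klLoss (enc.mean x) (enc.logvar x) := by
  rfl
```

Downstream proof files can rewrite with this lemma to reason about the two terms separately.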