# Diffusion forward process: Gaussian law
We use this file as the mathlib-backed anchor for diffusion probability theory.
It formalizes a standard fact used implicitly throughout diffusion models:
If Z is standard Gaussian in a finite-dimensional Euclidean space E, then
x_t := c0 • x0 + c1 • Z
is Gaussian for any fixed x0 : E and scalar coefficients c0,c1 : ℝ.
In the DDPM/VP setting, the usual coefficients are:
c0 = sqrt(ᾱ_t),  c1 = sqrt(1 - ᾱ_t)
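With these coefficients, the affine map of a standard Gaussian yields the forward marginal familiar from DDPM (Ho et al., 2020):

```latex
q(x_t \mid x_0) = \mathcal{N}\bigl(x_t;\ \sqrt{\bar\alpha_t}\, x_0,\ (1 - \bar\alpha_t)\, I\bigr)
```

since scaling a standard Gaussian by c1 gives covariance c1² I, and translating by c0 • x0 shifts the mean.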
At the spec layer (NN.Spec.Generative.Diffusion.ForwardProcess), we treat the noise ε as an
explicit tensor input. This file provides the probability-theory side: when that noise is sampled
from stdGaussian, the resulting distribution is Gaussian.
The result gives the exact law-level fact used by VP/DDPM forward processes: affine noising of a fixed data point by standard Gaussian noise produces another Gaussian probability measure. We keep the statement at this level because it is the reusable primitive needed by ELBO or SDE developments.
References:
- Ho, Jain, and Abbeel, "Denoising Diffusion Probabilistic Models", NeurIPS 2020.
- Song et al., "Score-Based Generative Modeling through Stochastic Differential Equations", ICLR 2021.
Forward noising measure in a finite-dimensional Euclidean space:
  x ↦ c0 • x0 + c1 • x, where x ~ stdGaussian.
We define it as a composition of:
- a linear map (x ↦ c1 • x), and
- a translation (y ↦ y + c0 • x0),
so that Gaussian-closure lemmas in mathlib apply directly.
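The two-step decomposition above can be sketched as follows. This is an illustrative sketch only: the name `forwardMeasure` is hypothetical, `stdGaussian` is taken as a parameter rather than mathlib's actual definition, and the typeclass assumptions are one plausible choice for a finite-dimensional Euclidean space.

```lean
open MeasureTheory

variable {E : Type*} [NormedAddCommGroup E] [InnerProductSpace ℝ E]
  [FiniteDimensional ℝ E] [MeasurableSpace E] [BorelSpace E]

/-- Hypothetical sketch: push the standard Gaussian forward through the
scaling `x ↦ c1 • x`, then through the translation `y ↦ y + c0 • x0`,
mirroring the linear-map-plus-translation decomposition described above. -/
noncomputable def forwardMeasure (stdGaussian : Measure E) (c0 c1 : ℝ) (x0 : E) :
    Measure E :=
  (stdGaussian.map fun x => c1 • x).map fun y => y + c0 • x0
```

Keeping the scaling and translation as separate `Measure.map` steps lets each step discharge against the corresponding Gaussian-closure lemma (linear image, then translation) rather than proving the affine case in one shot.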