TorchLean API

NN.Proofs.Analysis.InductiveProperties

Tensor-shape induction and lifting lemmas #

This file collects reusable proof patterns for reasoning about Tensor by structural induction on its Shape (i.e. the nested scalar/dim structure), plus a few higher-level lifting lemmas that are easiest to state once the Lipschitz/norm library is available.

Why this exists #

Many lemmas in TorchLean are naturally phrased as “for all shapes / for all dimensions …”. Rather than re-proving the same induction scaffolding (or writing deeply nested cases/induction blocks) throughout the repo, we keep a few canonical lemmas here.

These are especially useful when proving algebraic properties of *_spec tensor operations, or norm/metric bounds that are proved “componentwise” and then lifted to the whole tensor.

Why this is not under NN/Spec: NN/Spec should define the mathematical objects and operations. The induction principles below are theorem/proof conveniences about those objects, so they belong under NN/Proofs.

References #

theorem Proofs.tensor_induction_principle (P : {s : Spec.Shape} → Spec.Tensor s → Prop) (base : ∀ (x : ℝ), P (Spec.Tensor.scalar x)) (step : ∀ {n : ℕ} {s : Spec.Shape} (f : Fin n → Spec.Tensor s), (∀ (i : Fin n), P (f i)) → P (Spec.Tensor.dim f)) {s : Spec.Shape} (t : Spec.Tensor s) :
P t

Structural induction on tensors by their Shape.

Informally: to prove P t for all tensors t, it suffices to prove it for scalars, and to prove that it is preserved when we build a higher-dimensional tensor Tensor.dim f from its components.
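A minimal invocation sketch (everything named here is a hypothetical placeholder and both obligations are left as sorry; this is not a proof from the repository):

example (P : {s : Spec.Shape} → Spec.Tensor s → Prop)
    {s : Spec.Shape} (t : Spec.Tensor s) : P t :=
  Proofs.tensor_induction_principle P
    (fun x => sorry)                -- base: P holds for Spec.Tensor.scalar x
    (fun {n} {s} f ih => sorry)     -- step: P holds for Spec.Tensor.dim f, given ih : ∀ i, P (f i)
    t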

theorem Proofs.binary_tensor_induction (P : {s : Spec.Shape} → Spec.Tensor s → Spec.Tensor s → Prop) (base : ∀ (x y : ℝ), P (Spec.Tensor.scalar x) (Spec.Tensor.scalar y)) (step : ∀ {n : ℕ} {s : Spec.Shape} (f g : Fin n → Spec.Tensor s), (∀ (i : Fin n), P (f i) (g i)) → P (Spec.Tensor.dim f) (Spec.Tensor.dim g)) {s : Spec.Shape} (t₁ t₂ : Spec.Tensor s) :
P t₁ t₂

Structural induction for binary tensor predicates.

Informally: to prove P t₁ t₂ for all tensors of the same shape, it suffices to prove it for pairs of scalars, and to prove it for Tensor.dim f and Tensor.dim g whenever it already holds componentwise for f i and g i.
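The invocation pattern mirrors the unary case (again a hypothetical skeleton with sorry placeholders, not repository code):

example (P : {s : Spec.Shape} → Spec.Tensor s → Spec.Tensor s → Prop)
    {s : Spec.Shape} (t₁ t₂ : Spec.Tensor s) : P t₁ t₂ :=
  Proofs.binary_tensor_induction P
    (fun x y => sorry)              -- base: scalar pairs
    (fun {n} {s} f g ih => sorry)   -- step: Tensor.dim f / Tensor.dim g, given ih : ∀ i, P (f i) (g i)
    t₁ t₂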

theorem Proofs.l2_norm_concatenation {n : ℕ} {s : Spec.Shape} (f : Fin n → Spec.Tensor s) :
tensorL2Norm (Spec.Tensor.dim f) ^ 2 = List.foldl (fun (acc : ℝ) (i : Fin n) => acc + tensorL2Norm (f i) ^ 2) 0 (List.finRange n)

Squared L2 norm of a concatenation is the sum of squared L2 norms.

Informally: Tensor.dim f is a “stack/concat along the outer dimension”. The Euclidean norm satisfies ‖concat_i f i‖₂² = ∑ i, ‖f i‖₂².
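For concreteness, with n = 2 the right-hand side unfolds (since List.finRange 2 = [0, 1]) to

(0 + tensorL2Norm (f 0) ^ 2) + tensorL2Norm (f 1) ^ 2 = tensorL2Norm (f 0) ^ 2 + tensorL2Norm (f 1) ^ 2,

so, assuming tensorL2Norm is nonnegative (as any norm is), taking square roots gives tensorL2Norm (Spec.Tensor.dim f) = Real.sqrt (tensorL2Norm (f 0) ^ 2 + tensorL2Norm (f 1) ^ 2).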

Component-wise bounds extend to full tensors. Key principle for lifting scalar bounds to tensor bounds.

theorem Proofs.relu_nonneg_inductive {s : Spec.Shape} (t : Spec.Tensor s) (indices : List ℕ) :
match Spec.getSpec (Activation.reluSpec t) indices with | some x => x ≥ 0 | none => True

ReLU preserves non-negativity inductively over all dimensions.

PyTorch analogue: relu is defined pointwise as max(x, 0), so its outputs are always ≥ 0. https://pytorch.org/docs/stable/generated/torch.nn.functional.relu.html
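The underlying scalar fact is just nonnegativity of max x 0; a standalone Mathlib-level sketch, independent of the TorchLean definitions (assumes Mathlib is imported):

example (x : ℝ) : 0 ≤ max x 0 := le_max_right x 0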

theorem Proofs.sigmoid_bounds_inductive {s : Spec.Shape} (t : Spec.Tensor s) (indices : List ℕ) :
match Spec.getSpec (Spec.Tensor.mapSpec (fun (x : ℝ) => 1 / (1 + Real.exp (-x))) t) indices with | some y => 0 < y ∧ y < 1 | none => True

Sigmoid output bounds extend inductively. Shows 0 < σ(x) < 1 for all tensor components.

PyTorch analogue: torch.sigmoid maps reals to the open interval (0, 1) pointwise. https://pytorch.org/docs/stable/generated/torch.sigmoid.html
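The corresponding scalar fact can be checked directly against Mathlib (a standalone sketch, not the repository's proof; assumes Mathlib is imported):

example (x : ℝ) : 0 < 1 / (1 + Real.exp (-x)) ∧ 1 / (1 + Real.exp (-x)) < 1 := by
  have h := Real.exp_pos (-x)
  constructor
  · positivity                        -- numerator and denominator are positive
  · rw [div_lt_one (by positivity)]   -- reduces the goal to 1 < 1 + Real.exp (-x)
    linarith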

Matrix-vector multiplication dimension consistency. Proves output dimensions are correct regardless of input tensor structure.

Linear transformation preserves tensor structure inductively. Shows that linearity holds component-wise across all dimensions.

Compose a list of functions into a single function by folding left. This is the forward semantics of a sequential neural network.
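The description suggests a definition along the following lines (a hedged sketch under a hypothetical name; the repository's composeFunctions may differ in details such as argument order):

def composeFunctionsSketch {α : Type} (fs : List (α → α)) (x : α) : α :=
  fs.foldl (fun acc f => f acc) x

-- so composeFunctionsSketch [f₁, f₂, f₃] x = f₃ (f₂ (f₁ x)): the head of the list is applied first.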

theorem Proofs.nested_lipschitz_composition {s : Spec.Shape} (functions : List (Spec.Tensor s → Spec.Tensor s)) (constants : List ℝ) (h_len : functions.length = constants.length) (h_nonneg : ∀ (j : Fin constants.length), 0 ≤ constants.get j) (h_lipschitz : ∀ (i : Fin functions.length) (x y : Spec.Tensor s), tensorL2Dist (functions.get i x) (functions.get i y) ≤ constants.get (Fin.cast h_len i) * tensorL2Dist x y) (x y : Spec.Tensor s) :
tensorL2Dist (composeFunctions functions x) (composeFunctions functions y) ≤ List.foldl (fun (x1 x2 : ℝ) => x1 * x2) 1 constants * tensorL2Dist x y

Nested function composition preserves Lipschitz constants inductively. This shows that a deep network is Lipschitz with constant equal to the product of its layers’ Lipschitz constants.
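For intuition, with functions = [f₁, f₂] and constants = [c₁, c₂] (and the left-fold composition described above), the bound unfolds as

tensorL2Dist (f₂ (f₁ x)) (f₂ (f₁ y)) ≤ c₂ * tensorL2Dist (f₁ x) (f₁ y) ≤ c₂ * (c₁ * tensorL2Dist x y) = (1 * c₁ * c₂) * tensorL2Dist x y,

using the Lipschitz hypothesis for f₂, then the one for f₁ together with h_nonneg for c₂; the final product is exactly the List.foldl (· * ·) 1 constants on the right-hand side.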