Tensor-shape induction and lifting lemmas #
This file collects reusable proof patterns for reasoning about `Tensor` by structural induction
on its `Shape` (i.e. the nested `scalar`/`dim` structure), plus a few higher-level lifting lemmas
that are easiest to state once the Lipschitz/norm library is available.
Why this exists #
Many lemmas in TorchLean are naturally phrased as “for all shapes / for all dimensions …”.
Rather than re-proving the same induction scaffolding (or writing deeply nested `cases`/`induction`
blocks) throughout the repo, we keep a few canonical lemmas here:

- `tensor_induction_principle` for predicates `P : Tensor ℝ s → Prop`,
- `binary_tensor_induction` for predicates `P : Tensor ℝ s → Tensor ℝ s → Prop`.
These are especially useful when proving algebraic properties of `*_spec` tensor operations, or
norm/metric bounds that are proved “componentwise” and then lifted to the whole tensor.
Why this is not under `NN/Spec`: `NN/Spec` should define the mathematical objects and operations.
The induction principles below are theorem/proof conveniences about those objects, so they belong
under `NN/Proofs`.
References #
- This is standard structural induction on an inductive family; no external paper is required.
  The main nontrivial detail is simply that TorchLean encodes tensors as a tree indexed by
  `Shape`, rather than (say) a flat array with a runtime `shape`.
Structural induction on tensors by their `Shape`.
Informally: to prove `P t` for all tensors `t`, it suffices to prove it for scalars, and to prove
that it is preserved when we build a higher-dimensional tensor `Tensor.dim f` from its components.
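To make the pattern concrete, here is a minimal self-contained sketch (the `import Mathlib` is only needed by the later snippets, which reuse this toy encoding for `ℝ`). The `Shape`/`Tensor` definitions and the statement below are simplified stand-ins, not TorchLean's actual declarations, which may differ in detail:

```lean
import Mathlib

-- Hypothetical, simplified encoding: a shape is a tree of dimensions,
-- and a tensor is a tree of values indexed by its shape.
inductive Shape where
  | scalar : Shape
  | dim : Nat → Shape → Shape

inductive Tensor (α : Type) : Shape → Type where
  | scalar : α → Tensor α .scalar
  | dim : {n : Nat} → {s : Shape} → (Fin n → Tensor α s) → Tensor α (.dim n s)

/-- To prove `P t` for every tensor `t`: handle scalars, then show `P` is
preserved when a `Tensor.dim f` is built from components satisfying it. -/
theorem tensor_induction_principle {α : Type}
    (P : {s : Shape} → Tensor α s → Prop)
    (scalar_case : ∀ x : α, P (.scalar x))
    (dim_case : ∀ {n : Nat} {s : Shape} (f : Fin n → Tensor α s),
      (∀ i, P (f i)) → P (.dim f)) :
    ∀ {s : Shape} (t : Tensor α s), P t := by
  intro s t
  induction t with
  | scalar x => exact scalar_case x
  | dim f ih => exact dim_case f ih
```

This is essentially the recursor Lean already generates for such a family; the value of a named principle is packaging the motive so downstream proofs read as exactly two bullet points.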
Structural induction for binary tensor predicates.
Informally: to prove `P t₁ t₂` for all tensors of the same shape, it suffices to prove it for
scalar pairs, and to prove it componentwise for `Tensor.dim f`/`Tensor.dim g`.
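A sketch of the binary variant on the same toy encoding. The only new ingredient is destructing the second tensor with `cases`: index unification rules out the mismatched constructor combinations, so the scalar/`dim` pairing is exhaustive:

```lean
/-- To prove `P t₁ t₂` for all same-shape tensors: handle scalar pairs,
then `Tensor.dim` pairs, componentwise. (Toy encoding from above.) -/
theorem binary_tensor_induction {α : Type}
    (P : {s : Shape} → Tensor α s → Tensor α s → Prop)
    (scalar_case : ∀ x y : α, P (.scalar x) (.scalar y))
    (dim_case : ∀ {n : Nat} {s : Shape} (f g : Fin n → Tensor α s),
      (∀ i, P (f i) (g i)) → P (.dim f) (.dim g)) :
    ∀ {s : Shape} (t₁ t₂ : Tensor α s), P t₁ t₂ := by
  intro s t₁
  induction t₁ with
  | scalar x =>
    intro t₂
    cases t₂ with
    | scalar y => exact scalar_case x y
  | dim f ih =>
    intro t₂
    cases t₂ with
    | dim g => exact dim_case f g fun i => ih i (g i)
```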
Squared L2 norm of a concatenation is the sum of squared L2 norms.
Informally: `Tensor.dim f` is a “stack/concat along the outer dimension”. The Euclidean norm
satisfies `‖concat_i f i‖₂² = ∑ i, ‖f i‖₂²`.
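On the toy encoding, the squared norm can be modeled by a recursion that mirrors the tensor tree, and the stack/concat identity is then just the `dim` defining equation. `sqNorm` here is a hypothetical stand-in for the library's actual norm:

```lean
-- Hypothetical model of ‖·‖₂²: sum of squares, by recursion on the tree.
noncomputable def sqNorm : {s : Shape} → Tensor ℝ s → ℝ
  | _, .scalar x => x ^ 2
  | _, .dim f => ∑ i, sqNorm (f i)

-- `‖concat_i f i‖₂² = ∑ i, ‖f i‖₂²`, here by unfolding the definition.
example {n : Nat} {s : Shape} (f : Fin n → Tensor ℝ s) :
    sqNorm (.dim f) = ∑ i, sqNorm (f i) := by
  simp [sqNorm]
```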
Component-wise bounds extend to full tensors. Key principle for lifting scalar bounds to tensor bounds.
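One concrete instance of the lifting pattern on the toy encoding: an entrywise predicate plus an entrywise `map` (both hypothetical names, not TorchLean's `*_spec` operations), after which any scalar-level bound lifts by a two-case induction:

```lean
-- “Every scalar entry of `t` satisfies `p`”, by recursion on the structure.
def Entrywise (p : ℝ → Prop) : {s : Shape} → Tensor ℝ s → Prop
  | _, .scalar x => p x
  | _, .dim f => ∀ i, Entrywise p (f i)

-- Apply `g` to every scalar entry, preserving the shape.
def Tensor.map (g : ℝ → ℝ) : {s : Shape} → Tensor ℝ s → Tensor ℝ s
  | _, .scalar x => .scalar (g x)
  | _, .dim f => .dim fun i => (f i).map g

/-- A bound satisfied by all outputs of `g` holds at every entry of `t.map g`. -/
theorem entrywise_map {p : ℝ → Prop} {g : ℝ → ℝ} (hg : ∀ x, p (g x)) :
    ∀ {s : Shape} (t : Tensor ℝ s), Entrywise p (t.map g) := by
  intro s t
  induction t with
  | scalar x => simpa [Tensor.map, Entrywise] using hg x
  | dim f ih => simpa [Tensor.map, Entrywise] using ih
```

The ReLU and sigmoid bounds below then become one-line corollaries.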
ReLU outputs are non-negative, proved inductively over all dimensions.
PyTorch analogue: `relu` is defined pointwise as `max(x, 0)`, so its outputs are always ≥ 0.
https://pytorch.org/docs/stable/generated/torch.nn.functional.relu.html
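With the lifting lemma above, a pointwise model of ReLU (hypothetical `relu`; the library's spec may be organized differently) gets its non-negativity in one line from the scalar fact `0 ≤ max x 0`:

```lean
-- Hypothetical pointwise model: relu replaces every entry x with max(x, 0).
noncomputable def relu {s : Shape} (t : Tensor ℝ s) : Tensor ℝ s :=
  t.map fun x => max x 0

example {s : Shape} (t : Tensor ℝ s) : Entrywise (fun y => 0 ≤ y) (relu t) :=
  entrywise_map (fun x => le_max_right x 0) t
```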
Sigmoid output bounds extend inductively. Shows `0 < σ(x) < 1` for all tensor components.
PyTorch analogue: `torch.sigmoid` maps reals to the open interval (0, 1) pointwise.
https://pytorch.org/docs/stable/generated/torch.sigmoid.html
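The same pattern for a pointwise sigmoid model (again hypothetical names): prove the scalar bounds `0 < σ(x) < 1` once, then lift:

```lean
-- Hypothetical pointwise model: σ(x) = 1 / (1 + exp (-x)) at every entry.
noncomputable def sigmoid {s : Shape} (t : Tensor ℝ s) : Tensor ℝ s :=
  t.map fun x => 1 / (1 + Real.exp (-x))

theorem sigmoid_scalar_bounds (x : ℝ) :
    0 < 1 / (1 + Real.exp (-x)) ∧ 1 / (1 + Real.exp (-x)) < 1 := by
  refine ⟨by positivity, ?_⟩
  rw [div_lt_one (by positivity)]
  linarith [Real.exp_pos (-x)]

example {s : Shape} (t : Tensor ℝ s) :
    Entrywise (fun y => 0 < y ∧ y < 1) (sigmoid t) :=
  entrywise_map sigmoid_scalar_bounds t
```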
Matrix-vector multiplication dimension consistency. Proves output dimensions are correct regardless of input tensor structure.
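On a shape-indexed encoding this is close to automatic: a hypothetical `matvec` (with helper `dot` and `entries`, all names invented for the sketch) can only be written at a type that already asserts the output dimension, so consistency is enforced by the type checker rather than by a separate lemma:

```lean
-- Dot product of two length-`n` real vectors.
noncomputable def dot {n : Nat} (v w : Fin n → ℝ) : ℝ := ∑ i, v i * w i

-- View a rank-1 tensor as its vector of scalar entries.
def entries {n : Nat} : Tensor ℝ (.dim n .scalar) → Fin n → ℝ
  | .dim f, i => match f i with | .scalar x => x

-- The result type `Tensor ℝ (.dim m .scalar)` *is* the dimension-consistency
-- statement: an m × n matrix applied to a length-n vector has length m.
noncomputable def matvec {m n : Nat}
    (A : Tensor ℝ (.dim m (.dim n .scalar)))
    (v : Tensor ℝ (.dim n .scalar)) : Tensor ℝ (.dim m .scalar) :=
  match A with
  | .dim rows => .dim fun i => .scalar (dot (entries (rows i)) (entries v))
```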
Linear transformation preserves tensor structure inductively. Shows that linearity holds component-wise across all dimensions.
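A sketch of the componentwise-linearity pattern with a hypothetical entrywise `add` on the toy encoding: an additive scalar map commutes with tensor addition, and the proof is exactly the binary-induction shape (induction on the first tensor, `cases` on the second):

```lean
-- Hypothetical entrywise addition.
noncomputable def Tensor.add : {s : Shape} → Tensor ℝ s → Tensor ℝ s → Tensor ℝ s
  | _, .scalar x, .scalar y => .scalar (x + y)
  | _, .dim f, .dim g => .dim fun i => (f i).add (g i)

/-- An additive map distributes over entrywise addition, in every component. -/
theorem Tensor.map_add {g : ℝ → ℝ} (hg : ∀ x y, g (x + y) = g x + g y) :
    ∀ {s : Shape} (t₁ t₂ : Tensor ℝ s),
      (t₁.add t₂).map g = (t₁.map g).add (t₂.map g) := by
  intro s t₁
  induction t₁ with
  | scalar x =>
    intro t₂
    cases t₂ with
    | scalar y => simp [Tensor.add, Tensor.map, hg]
  | dim f ih =>
    intro t₂
    cases t₂ with
    | dim f' =>
      have h : ∀ i, ((f i).add (f' i)).map g = ((f i).map g).add ((f' i).map g) :=
        fun i => ih i (f' i)
      simp [Tensor.add, Tensor.map, h]
```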
Compose a list of functions into a single function by folding left. This is the forward semantics of a sequential neural network.
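A generic sketch of the fold (hypothetical name `composeLayers`; the real definition folds tensor-valued layers rather than an arbitrary type). Folding left means the head of the list is the first layer applied:

```lean
-- Fold layers into one function: composeLayers [f, g] = g ∘ (f ∘ id).
def composeLayers {α : Type} (layers : List (α → α)) : α → α :=
  layers.foldl (fun acc f => f ∘ acc) id

-- The head layer runs first: (3 + 1) * 2 = 8.
example : composeLayers [(· + 1), (· * 2)] 3 = 8 := rfl
```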
Nested function composition multiplies Lipschitz constants, proved inductively: a deep network is Lipschitz with constant equal to the product of its layers’ Lipschitz constants.
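A sketch of the bound via Mathlib's `LipschitzWith.comp` (`hf.comp hg : LipschitzWith (Kf * Kg) (f ∘ g)`). To keep the induction clean, this version folds with `foldr`, which composes the listed layers in the opposite application order from `composeLayers` above; the product-of-constants statement is unaffected:

```lean
-- Each layer is paired with a Lipschitz constant witnessing its bound.
theorem compose_lipschitz (layers : List ((ℝ → ℝ) × NNReal))
    (h : ∀ p ∈ layers, LipschitzWith p.2 p.1) :
    LipschitzWith (layers.map Prod.snd).prod
      ((layers.map Prod.fst).foldr (· ∘ ·) id) := by
  induction layers with
  | nil => simpa using LipschitzWith.id
  | cons p rest ih =>
    simp only [List.map_cons, List.prod_cons, List.foldr_cons]
    exact (h p (by simp)).comp (ih fun q hq => h q (by simp [hq]))
```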