Tensor Algebra Proofs #
Backend-generic algebraic tensor lemmas.
This file provides the semiring-level dot-product API and supporting fold/sum lemmas used by
autograd and tape soundness proofs. The point of this file is to avoid baking ℝ, topology, or
calculus assumptions into algebra that only needs addition, multiplication, and finite sums.
NN.Proofs.Tensor.Basic imports this file and adds the ℝ-specialized spec-facing toolkit
(norms, shape facts, Frobenius identities, and convenience lemmas for analysis proofs). Keeping this
file generic lets correctness theorems reuse the same algebra over exact rationals, real numbers, or
other semiring-like scalar models.
PyTorch correspondence / citations #
- Elementwise product + sum (“flattened dot” for same-shape tensors): (a * b).sum(). https://pytorch.org/docs/stable/generated/torch.sum.html
- 1D dot product: torch.dot(a, b) (for real tensors). https://pytorch.org/docs/stable/generated/torch.dot.html
Vector views #
View a 1D tensor of shape (n,) as a function Fin n → α.
Informally: this is the spec-level analogue of PyTorch indexing t[i] for a vector.
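As a hedged sketch of such a view (the file's actual toVec is stated for its own tensor type, so the name and signature below are illustrative only), a length-n list can be packaged as a function Fin n → α:

```lean
import Mathlib

/-- Hypothetical sketch: view a list of length `n` as a function `Fin n → α`,
the spec-level analogue of PyTorch indexing `t[i]` for a vector.
The file's own `toVec` may differ in name and signature. -/
def toVecList {α : Type*} {n : Nat} (t : List α) (h : t.length = n) :
    Fin n → α :=
  fun i => t.get (Fin.cast h.symm i)

#eval toVecList [10, 20, 30] rfl (1 : Fin 3)  -- 20
```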
Recursive same-shape dot product #
Dot product for tensors of the same shape: multiply elementwise and sum over all scalar leaves.
For shape (n,) this matches torch.dot (over a commutative semiring, i.e. ignoring complex
conjugation). More generally it matches the common PyTorch idiom (a * b).sum() for same-shape
tensors.
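A minimal sketch of the base (n,) case, assuming vectors are modelled as functions Fin n → α (the file's dot recurses over nested shapes; dot1 below is a hypothetical name illustrating only the vector case):

```lean
import Mathlib

/-- Hypothetical sketch of the shape-(n,) case: multiply elementwise and fold
the sum, matching `torch.dot(a, b)` / `(a * b).sum()` for vectors. -/
def dot1 {α : Type*} [Mul α] [Add α] [Zero α] {n : Nat}
    (a b : Fin n → α) : α :=
  (List.finRange n).foldl (fun acc i => acc + a i * b i) 0

#eval dot1 (α := Nat) ![1, 2, 3] ![4, 5, 6]  -- 1*4 + 2*5 + 3*6 = 32
```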
List-fold helpers #
Distribute a fold that accumulates g1 x + g2 x into the sum of two separate folds.
This is a general “sum splits over addition” lemma used to prove bilinearity-like properties of
dot.
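The Finset-level counterpart of this fold lemma is Mathlib's Finset.sum_add_distrib; a minimal illustration (the file's list-fold statement is the executable analogue of this):

```lean
import Mathlib

/-- "Sum splits over addition" at the big-operator level. -/
example {α β : Type*} [AddCommMonoid α] (s : Finset β) (g1 g2 : β → α) :
    ∑ x ∈ s, (g1 x + g2 x) = (∑ x ∈ s, g1 x) + ∑ x ∈ s, g2 x :=
  Finset.sum_add_distrib
```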
From executable folds to proof-friendly sums #
Rewrite the spec-style fold over List.finRange n into a proof-friendly Finset.univ.sum.
Many spec definitions use List.foldl (it is definitional and convenient for computation), while
proofs often prefer Finset.sum so they can use standard big-operator lemmas.
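The bridge has roughly the following shape, stated here over plain functions as a sketch (proof omitted; it goes by induction on the list with a generalized accumulator):

```lean
import Mathlib

/-- Sketch: the executable fold over `List.finRange n` agrees with the
proof-friendly big-operator sum. -/
example {α : Type*} [AddCommMonoid α] {n : Nat} (f : Fin n → α) :
    (List.finRange n).foldl (fun acc i => acc + f i) 0 = ∑ i, f i := by
  sorry -- by induction on the list, generalizing the accumulator
```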
Dot-product algebra #
Scaling #
Symmetry of dot (commutativity) over a commutative semiring.
Additivity in the left argument: dot (a + b) c = dot a c + dot b c.
Additivity in the right argument: dot a (b + c) = dot a b + dot a c.
dot a 0 = 0 (where 0 is the all-zeros tensor via fill).
For vectors (n,), dot can be expressed as an explicit Finset.univ.sum.
This is the bridge between the recursive dot definition and “PyTorch-looking” summation formulas.
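Once dot is rewritten as a Finset.univ.sum, the algebra lemmas above reduce to standard big-operator facts. A hedged Mathlib-style sketch over plain functions Fin n → α (the file states these for its tensor dot):

```lean
import Mathlib

/-- Symmetry at the sum level. -/
example {α : Type*} [CommSemiring α] {n : Nat} (a b : Fin n → α) :
    ∑ i, a i * b i = ∑ i, b i * a i := by
  simp [mul_comm]

/-- Additivity in the left argument at the sum level. -/
example {α : Type*} [CommSemiring α] {n : Nat} (a b c : Fin n → α) :
    ∑ i, (a i + b i) * c i = ∑ i, a i * c i + ∑ i, b i * c i := by
  simp [add_mul, Finset.sum_add_distrib]
```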
Matrix/vector adjointness: ⟪y, W x⟫ = ⟪y W, x⟫ #
Matrix-vector multiply coordinate expansion.
This is a spec-to-proof bridge: it turns the List.finRange fold in mat_vec_mul_spec into a
Finset.univ.sum formula, matching the textbook / PyTorch view of matvec as a dot product.
Vector-matrix multiply coordinate expansion.
This is the right-adjoint analogue of toVec_mat_vec_mul_spec.
Adjointness identity for matvec under dot.
Informally: ⟨dLdy, W dx⟩ = ⟨dLdy W, dx⟩, i.e. the transpose-adjoint relationship between
matrix-vector and vector-matrix multiplication.
In PyTorch this corresponds to the familiar identity (dLdyᵀ @ (W @ dx)) = ((dLdyᵀ @ W) @ dx),
written with explicit sums.
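The sum-level core of the identity is just swapping the two finite sums and reassociating products. A sketch over plain functions (the file's lemma is stated for its tensor types, so this statement is illustrative):

```lean
import Mathlib

/-- Core of ⟪y, W x⟫ = ⟪y W, x⟫: exchange the order of summation. -/
example {α : Type*} [CommSemiring α] {m n : Nat}
    (y : Fin m → α) (W : Fin m → Fin n → α) (x : Fin n → α) :
    ∑ i, y i * ∑ j, W i j * x j = ∑ j, (∑ i, y i * W i j) * x j := by
  simp_rw [Finset.mul_sum, Finset.sum_mul, mul_assoc]
  exact Finset.sum_comm
```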