TensorArray: a simple array-backed tensor representation
Spec.Tensor is the canonical, shape-indexed tensor type for the spec layer. It is great for
proofs and pure definitions, but it is not always the most convenient representation for:
- IO (CSV/JSON/NPY),
- interop with external tooling,
- or performance-sensitive evaluation where you want explicit arrays.
`TensorArray.Tensor` is a lightweight alternative:
- the shape is a runtime `List Nat`,
- the data is a flat `Array α`,
- and a proof field (`shape_valid`) records that the array length matches the shape product.
The bridge back to Spec.Tensor lives in NN/Spec/Core/TensorBridge.lean.
Why this representation exists:
- Most "paper definitions" are easiest with `Spec.Tensor` because shape correctness is enforced by types and proofs follow the inductive structure.
- Most integration tasks (file IO, external tools, or simple numerics) are easier with a flat array plus a runtime shape.
- Having both makes the trust boundaries explicit: array-backed tensors are convenient, while the shape-indexed tensors are the place we do most proofs.
A tensor is represented as a flat array of elements and a shape (as a list of dimensions). The `shape_valid` proof ensures the array size matches the product of the shape dimensions.
- data : Array α
  Flat row-major data buffer.
- shape_valid : data.size = shapeProd shape
  Proof that the buffer length matches the product of the runtime dimensions.
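Put together, the fields above suggest a structure along these lines (a sketch: the field names come from this file, but the exact universe and parameter setup is an assumption, and `shapeProd` is the shape-product function documented in this file):

```lean
/-- Sketch: an array-backed tensor indexed by a runtime shape list. -/
structure Tensor (α : Type) (shape : List Nat) where
  /-- Flat row-major data buffer. -/
  data : Array α
  /-- The buffer length matches the product of the dimensions. -/
  shape_valid : data.size = shapeProd shape
```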
Product of dimensions for a runtime shape list.
This is the runtime analogue of `Spec.Shape.size` for `Spec.Shape`.
We keep it as a `def` (not just a local `let`) because it appears everywhere `shape_valid` is constructed or rewritten.
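A minimal definition consistent with the `foldl` product lemmas mentioned later in this file (the `(· * ·)` / `1` choice is an assumption):

```lean
/-- Product of dimensions for a runtime shape list,
    e.g. `shapeProd [2, 3, 4] = 24` and `shapeProd [] = 1`. -/
def shapeProd (shape : List Nat) : Nat :=
  shape.foldl (· * ·) 1
```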
Build a tensor from an array when you already have a size proof.
Design choice:
- `Tensor` stores the shape at the type level (`Tensor α shape`), so callers must provide `h : arr.size = shapeProd shape`.
- This makes "I reshaped / reinterpreted the data" a conscious action with an explicit proof.
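A sketch of such a constructor (the name `ofArray` is hypothetical; the hypothesis `h` matches the one described above):

```lean
/-- Build a tensor from a flat array, given a proof that the sizes agree. -/
def Tensor.ofArray (arr : Array α) (h : arr.size = shapeProd shape) :
    Tensor α shape :=
  { data := arr, shape_valid := h }
```

Callers would typically discharge `h` with `rfl` for literal arrays and shapes, or by `simp [shapeProd]` after rearranging dimensions.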
Helper lemma: factoring a left-multiplication out of the `foldl` product.
This is used to prove `shapeProd_cons` and similar "shape product algebra" facts.
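The factoring fact and its `shapeProd_cons` consequence might be stated as follows (a sketch: the statements follow the prose above, but the proof scripts are plausible rather than verified):

```lean
/-- Factor the starting accumulator out of a `foldl` product. -/
theorem foldl_mul_factor (a : Nat) (l : List Nat) :
    l.foldl (· * ·) a = a * l.foldl (· * ·) 1 := by
  induction l generalizing a with
  | nil => simp
  | cons d ds ih =>
    simp only [List.foldl]
    rw [ih (a * d), ih (1 * d), Nat.one_mul, Nat.mul_assoc]

/-- Unfold one dimension of the shape product. -/
theorem shapeProd_cons (d : Nat) (ds : List Nat) :
    shapeProd (d :: ds) = d * shapeProd ds := by
  simp only [shapeProd, List.foldl, Nat.one_mul]
  exact foldl_mul_factor d ds
```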
Length of `List.zipWith` is the minimum of the input lengths.
We keep these small list lemmas local to this file because they are only needed to justify `shape_valid` proofs for array operations like `zipWith`.
`Array.toList` preserves size as `List.length`.
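Recent standard libraries may already provide these facts (e.g. as `List.length_zipWith`); local restatements might look like (proofs sketched, untested):

```lean
/-- `zipWith` stops at the shorter list, so its length is the minimum. -/
theorem length_zipWith (f : α → β → γ) :
    ∀ (l₁ : List α) (l₂ : List β),
      (List.zipWith f l₁ l₂).length = min l₁.length l₂.length
  | [], _ => by simp
  | _ :: _, [] => by simp
  | _ :: l₁, _ :: l₂ => by
    simp [List.zipWith, length_zipWith f l₁ l₂, Nat.succ_min_succ]

/-- Converting to a list preserves the element count.
    In current Lean cores this may hold definitionally. -/
theorem toList_length (a : Array α) : a.toList.length = a.size := rfl
```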
`flatIndexAux` returns an index that is bounded by the "mixed-radix" size implied by the remaining shape.
Intuition: starting with accumulator `acc`, the recursion computes something of the form `acc * shapeProd shape + tail`, where `tail < shapeProd shape`.
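A standard mixed-radix accumulator recursion matching that intuition (the exact signature here is an assumption):

```lean
/-- Fold a multi-dimensional index into a flat row-major offset.
    Each step folds one (dimension, index) pair into the accumulator. -/
def flatIndexAux : List Nat → List Nat → Nat → Nat
  | d :: ds, i :: is, acc => flatIndexAux ds is (acc * d + i)
  | _, _, acc => acc

-- Documented bound (when every index is in range for its dimension):
--   flatIndexAux shape idx acc = acc * shapeProd shape + tail
--   with tail < shapeProd shape.
```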
Map a function over all elements (shape preserved).
The only subtlety is the `shape_valid` proof: mapping doesn't change the array length.
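A sketch using core's `Array.map` and its size lemma `Array.size_map`:

```lean
def Tensor.map (f : α → β) (t : Tensor α shape) : Tensor β shape :=
  { data := t.data.map f
    -- Mapping preserves the array length, so the old proof transports.
    shape_valid := by rw [Array.size_map]; exact t.shape_valid }
```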
Elementwise binary operation (shape preserved).
We require both tensors to have the same shape at the type level, so shape mismatches are
unrepresentable here.
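A sketch (the argument order of `Array.zipWith` has varied across Lean versions, and the size-lemma name `Array.size_zipWith` is an assumption):

```lean
def Tensor.zipWith (f : α → β → γ)
    (a : Tensor α shape) (b : Tensor β shape) : Tensor γ shape :=
  { data := Array.zipWith f a.data b.data
    -- Both inputs have length `shapeProd shape`, so the `min` collapses.
    shape_valid := by
      rw [Array.size_zipWith, a.shape_valid, b.shape_valid, Nat.min_self] }
```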
Reshape a tensor to a new shape with the same number of elements.
This is "view"-style: it reuses the same underlying data array.
The proof h is the only thing that changes.
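A view-style sketch: the data array is passed through untouched and only the proof is rewritten (names and the proof script are assumptions):

```lean
def Tensor.reshape (t : Tensor α shape) (newShape : List Nat)
    (h : shapeProd newShape = shapeProd shape) : Tensor α newShape :=
  { data := t.data
    shape_valid := by rw [h]; exact t.shape_valid }
```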
ReLU activation (elementwise max with zero).
This is written in the simplest "executable" style: a branch on x > 0.
For interval/real semantics, use Spec layer ops; this module is for array-backed computations.
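Assuming this file's `Tensor.map` and an ordered element type, the branching style could read as follows (the typeclass choices are an assumption):

```lean
def Tensor.relu [LT α] [DecidableLT α] [OfNat α 0]
    (t : Tensor α shape) : Tensor α shape :=
  -- Elementwise branch: keep positive entries, zero out the rest.
  t.map (fun x => if x > 0 then x else 0)
```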
Matrix-vector multiplication: an (m × n) matrix times an (n)-vector gives an (m)-vector.
This is a direct reference implementation intended for small sizes and clarity. If you need performance, you generally want the runtime/TorchLean path instead.
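A direct reference-style sketch (the name, typeclasses, and `shape_valid` proof script are assumptions; indexing uses the panicking `[...]!` accessor for brevity, hence the `Inhabited` bound):

```lean
def Tensor.matVec {α : Type} [Add α] [Mul α] [OfNat α 0] [Inhabited α]
    {m n : Nat} (mat : Tensor α [m, n]) (vec : Tensor α [n]) :
    Tensor α [m] :=
  { data := Array.ofFn fun (i : Fin m) =>
      -- Row-major layout: entry (i, j) lives at flat offset i * n + j.
      (List.range n).foldl
        (fun acc j => acc + mat.data[i.val * n + j]! * vec.data[j]!) 0
    shape_valid := by simp [shapeProd] }
```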