Core tensor datatype (Spec.Tensor) #
This file defines the foundational, shape-indexed tensor type used throughout TorchLean's spec layer:
Tensor α s
Why an inductive / functional representation? #
Instead of storing a flat array plus a shape, the spec tensor is a function from indices:
- a scalar is `α`
- an n-dimensional tensor is `Fin n → ...`
This has three practical benefits:
- Shape safety is enforced by the type.
- Proofs are natural: you reason by recursion on the `Shape`/`Tensor` constructors.
- No layout commitment: the spec layer doesn’t bake in row-major vs column-major storage.
For long executable runs, repeated functional updates can create “closure chains”. Use
Tensor.materialize (documented below) to rebuild a tensor into an array-backed normal form.
Shape-indexed tensor datatype for the spec layer.
This is a functional representation: a scalar is `α`, and an n-dimensional tensor is a function `Fin n → ...`.
This keeps proofs and shape-safe programming simple, and avoids committing to a concrete memory layout in the spec layer.
- scalar {α : Type} : α → Tensor α Shape.scalar
- dim {α : Type} {n : ℕ} {s : Shape} : (Fin n → Tensor α s) → Tensor α (Shape.dim n s)
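For instance, a length-3 vector whose `i`-th entry is `i` can be built by wrapping an index function in the two constructors above (a sketch; the entry values are arbitrary):

```lean
-- Build `Tensor ℕ (Shape.dim 3 Shape.scalar)` directly from the constructors:
-- an outer `dim` over an index function, with `scalar` at the leaves.
example : Tensor ℕ (Shape.dim 3 Shape.scalar) :=
  Tensor.dim (fun i => Tensor.scalar i.val)
```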
Instances For
Runtime note: materialization #
Tensor α s is a functional representation (Fin n → ...). This is excellent for proofs, but
repeated updates (for example, many SGD steps) can build deep chains of closures
(fun i => ... (old (old (old i))) ...). Evaluating those chains later becomes progressively more
expensive.
Tensor.materialize rebuilds a tensor into an array-backed normal form (at every dimension), which
keeps long-running training loops from degrading.
It is extensionally the identity (the same mathematical tensor), but it is much friendlier to the runtime evaluator.
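As a usage sketch (the `step` update function below is hypothetical), a long-running loop can re-normalize periodically:

```lean
-- Apply `step` n times, materializing every 100 iterations so that
-- closure chains never grow past a bounded depth.
def runSteps {s : Shape} (step : Tensor Float s → Tensor Float s)
    (t : Tensor Float s) (n : ℕ) : Tensor Float s :=
  (List.range n).foldl
    (fun acc i =>
      let acc' := step acc
      if i % 100 == 99 then Tensor.materialize acc' else acc')
    t
```

Materializing every iteration would also be correct (it is extensionally the identity); batching it simply amortizes the rebuild cost.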
Tensor.materialize preserves tensor values (it is extensionally the identity).
Default tensor value for any shape (uses Inhabited.default at scalars).
Instances For
Extract the scalar value from a scalar tensor.
Instances For
Inject a scalar into a scalar tensor.
Instances For
ofScalar (toScalar t) = t for scalar tensors.
AddCommMonoid on scalar tensors, transported from α via Tensor.scalarEquiv.
Equivalence between vectors-as-tensors and functions Fin n → α.
Instances For
AddCommMonoid on vector tensors, transported from Fin n → α via Tensor.dimScalarEquiv.
Cast a vector tensor along an equality of dimensions.
Instances For
Cast lemmas #
In dependently-typed proofs (especially graph/tape correctness proofs), the same cast may arise
with different proof terms. Since equality proofs are proof-irrelevant, we provide a few small
normalization lemmas for Tensor.cast_shape.
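The typical shape of such a lemma looks as follows (a sketch, assuming `Tensor.cast_shape : s = s' → Tensor α s → Tensor α s'`; the authoritative statements are the lemmas in this file):

```lean
-- Casting along two different proofs of the same shape equality gives
-- the same tensor, since `Eq` proofs are irrelevant.
example {α : Type} {s s' : Shape} (h₁ h₂ : s = s') (t : Tensor α s) :
    Tensor.cast_shape h₁ t = Tensor.cast_shape h₂ t := by
  cases h₁; rfl
```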
Indexing helpers #
Indexing design notes:
- `get_spec` takes a runtime multi-index (`List Nat`) and returns `Option α`. This is intentionally permissive and "frontend-friendly": you can use it for debugging, JSON import/export checks, and any place you want to try an index without committing to proofs.
- For proof-driven code, you usually want `Fin n` indices and the `get`/`get2` helpers.
PyTorch analogy:
- `get_spec t [i,j,k]` is like `t[i,j,k]` but returns `none` instead of throwing.
- `get t i` is like slicing the first dimension: `t[i]`. (TorchLean also supports Lean’s indexing syntax: `t[i]` elaborates to `Spec.get t i`.)
- `get2 A i j` is like `A[i,j]`. (And `A[(i, j)]` elaborates to `Spec.get2 A i j` for matrix-shaped scalar tensors.)
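A sketch of the three styles on a 2×3 matrix-shaped tensor (types as suggested by the analogies above):

```lean
variable (t : Tensor Float (Shape.dim 2 (Shape.dim 3 Shape.scalar)))

-- Runtime multi-index: permissive, `Option`-valued.
#check get_spec t [1, 2]
-- First-axis indexing with a `Fin`; `t[i]` elaborates to `Spec.get t i`.
#check fun (i : Fin 2) => t[i]
-- Matrix entry; `t[(i, j)]` elaborates to `Spec.get2 t i j`.
#check fun (i : Fin 2) (j : Fin 3) => t[(i, j)]
```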
Enable Lean’s indexing syntax for spec tensors.
After this instance, you can write t[i] as notation for Spec.get t i.
We use the domain condition True because the index is already a Fin n, so it is always
in-bounds by construction.
Extract the i-th entry from a vector tensor.
Instances For
Enable Lean’s indexing syntax for matrix-shaped scalar tensors.
After this instance, you can write A[(i, j)] as notation for Spec.get2 A i j.
get_at_or_zero is a total variant of get_spec used in places where a default value is more
convenient than Option.
We keep both get_spec and get_at_or_zero because they serve different roles:
- `get_spec` is better when you want to distinguish "out of bounds" explicitly.
- `get_at_or_zero` is better for formulas that naturally treat out-of-bounds as padding.
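Concretely, on an out-of-bounds index the two helpers diverge as follows (a sketch, assuming `get_at_or_zero` takes the same `List Nat` index as `get_spec`):

```lean
-- For a length-2 vector, index 5 is out of bounds:
-- `get_spec` reports the failure, `get_at_or_zero` pads with zero.
example (v : Tensor Float (Shape.dim 2 Shape.scalar)) : Prop :=
  get_spec v [5] = none ∧ get_at_or_zero v [5] = 0
```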
tensor_cast is definitionally Tensor.cast_shape (a uniform cast normal form).
Replicate a scalar tensor to any shape.
Instances For
Slicing helpers.
We keep slice_spec as a focused "first-axis select" operation since it shows up all over the
place in spec definitions.
PyTorch analogy: `slice_spec t i` is `t[i]`.
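As a shape-level sketch (assuming `slice_spec` takes a `Fin n` first-axis index, like `get`):

```lean
-- Selecting slice `i` along the first axis drops that dimension:
-- shape `(n, ...)` becomes `(...)`.
example {α : Type} {n : ℕ} {s : Shape}
    (t : Tensor α (Shape.dim n s)) (i : Fin n) : Tensor α s :=
  slice_spec t i
```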
collect_at_index_spec is a "transpose-like" helper that pulls a fixed position out of every
batch entry.
This is a small but surprisingly useful building block in attention-like code and dataset
manipulations, where you frequently want to reorganize (batch, n, ...) into (n, batch, ...)
without committing to a concrete memory layout.
Collect the j-th element from each batch entry, producing a tensor with batch dimension moved to
the end.
This is a small "transpose-like" helper used in attention-like code and dataset reshaping.