Tensor API Implementation #
User-facing tensor API for TorchLean.
This is the implementation leaf behind the public NN.Tensor umbrella. It is the ergonomic layer
that sits on top of the spec-first tensor semantics in NN.Spec.*. It does not introduce new math;
it provides constructors and syntax intended for examples, tests, small models, and quickstarts.
Typical responsibilities here:
- shape aliases such as Shape.Vec and Shape.NCHW,
- friendly constructors from lists and literals,
- small syntax helpers for examples/tests,
- and lightweight printing support for runtime-friendly dtypes.
Notation policy:
- keep compact literal constructors (shape!, tensor!, tensorND!, tensorF!, tensor32!, fin!) in this module so they are easy to find and do not leak into unrelated imports,
- keep more semantic glyphs scoped when possible (as in NN.IR and NN.Floats.IEEEExec),
- and prefer namespace-local aliases over reusing the same unscoped token in multiple layers.
Lean is statically typed, so the element type usually plays the role of “dtype”:
let x := NN.Tensor.tensor1d (α := Float) [0.1, 0.2]
let q := NN.Tensor.tensor1d (α := ℚ) [1, 2]
If you see Tensor Float _ in code, the _ just means “let Lean infer the shape from the RHS”.
In examples we prefer to omit the type annotation entirely unless it helps readability.
For convenience, this module also provides constructors that start from Float literals and cast
into common backends such as IEEE32Exec.
Printing: for Tensor ℝ we intentionally refuse to pretty-print, since that backend is meant for
proof-oriented mathematics rather than runtime IO. Use Float, IEEE32Exec, ℚ, etc. for values
you actually want to display.
Local alias for the canonical spec tensor type.
Local alias for the canonical spec shape type.
Scalar tensor constructor alias.
Dimension tensor constructor alias.
Shape cast alias.
Scalar extraction #
Extract the scalar value from a scalar-shaped tensor.
PyTorch comparison: like t.item() for a 0-dim tensor.
Dot-notation sugar for scalar tensors: t.item.
This is defined at the Spec.Tensor namespace so that it works with the canonical tensor type.
Tensor.scalar x round-trips through Spec.Tensor.item.
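Example (a small sketch; the constructor and accessor names follow the statements above, and the Float literal and #eval are only illustrative):
def s := Tensor.scalar (3.0 : Float)
#eval s.item   -- recovers 3.0, like t.item() on a 0-dim tensor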
Common shape aliases #
Scalar shape alias.
Dimension shape constructor alias.
Total number of scalar leaves in a shape.
Vector shape n.
Matrix shape rows × cols.
1D signal shape C × L (channels-first). PyTorch analogy: Conv1d input without batch.
Batched 1D signal shape N × C × L. PyTorch analogy: Conv1d input.
Image shape C × H × W (channel-first).
Image shape H × W × C (channel-last).
Batched image shape N × C × H × W (PyTorch default for images).
Batched image shape N × H × W × C (channel-last).
3D volume shape C × D × H × W (channels-first). PyTorch analogy: Conv3d input without batch.
Batched 3D volume shape N × C × D × H × W. PyTorch analogy: Conv3d input.
User-facing alias for a single image shape.
This is the shape you usually think of as (C, H, W) in PyTorch.
Internally it is the same as Shape.CHW.
User-facing alias for a batch of images.
This is the shape you usually think of as (N, C, H, W) in PyTorch.
Internally it is the same as Shape.NCHW.
Conv kernel shape OutC × InC × kH × kW.
1D conv kernel shape OutC × InC × kL.
3D conv kernel shape OutC × InC × kD × kH × kW.
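Example (a hedged sketch, assuming each alias takes the dimensions listed in its description as Nat arguments):
def vecShape   := Shape.Vec 10            -- shape (10)
def imgShape   := Shape.CHW 3 224 224     -- shape (3, 224, 224), channel-first
def batchShape := Shape.NCHW 8 3 224 224  -- shape (8, 3, 224, 224)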
Common tensor type aliases #
Vector tensor n (shape-indexed, like a 1-D PyTorch tensor of length n).
Matrix tensor rows × cols (shape-indexed).
Image tensor C × H × W (shape-indexed).
Batched image tensor N × C × H × W (shape-indexed).
Construction #
Convert a runtime list of dimensions like [2, 3, 4] into a nested Shape.
Why this exists:
- In the spec layer we usually carry the shape in the type (great for safety).
- In “API land” we often start from lists (CLI args, JSON, small examples, …).
shapeOfDims is the bridge between those representations.
shapeOfDims round-trips through Spec.Shape.toList.
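Example (a minimal sketch; the dims are arbitrary):
def s := shapeOfDims [2, 3, 4]
-- by the round-trip lemma above, Spec.Shape.toList s = [2, 3, 4]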
Number of scalar elements (“numel”) implied by a runtime dims list.
Create a 1-D tensor from a Lean List.
Notes:
- The shape is computed from xs.length, so the type remembers the length.
- This is total and does not perform any runtime checks.
PyTorch analogy: torch.tensor(xs) producing a 1D tensor.
One-hot vectors #
2-D tensor from nested lists, returning none for empty/ragged inputs.
This delegates to Spec.from_list_2d, which refuses:
- an empty outer list,
- any empty row,
- or rows with mismatched lengths.
PyTorch analogy: torch.tensor(xss) also refuses ragged inputs.
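Example (a hedged sketch, assuming tensor2d lives next to tensor1d and takes the nested list directly):
def ok  := NN.Tensor.tensor2d (α := Float) [[1.0, 2.0], [3.0, 4.0]]  -- some _ of shape 2 × 2
def bad := NN.Tensor.tensor2d (α := Float) [[1.0, 2.0], [3.0]]       -- none: ragged rows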
2-D tensor from nested lists, with a human-readable error message on failure.
Ragged-friendly 2D constructors #
2-D tensor from nested lists, padding/truncating each row to nCols.
This is the permissive sibling of tensor2d: it never fails, but it will silently pad with
default (and drop extra entries beyond nCols). This is useful when you deliberately feed ragged inputs
(e.g. batching variable-length sequences after padding).
PyTorch analogy: closer to pad_sequence(..., batch_first=True) followed by torch.tensor.
2-D tensor from nested lists, padding to the maximum row length (0 if empty).
This is convenient when you just want a rectangular tensor without precomputing nCols.
General N-D tensors from a flat list #
The “real” spec constructors are shape-indexed, i.e. you usually build tensors by recursion on a
Shape. In example and data-loading code, though, it’s common to have a runtime dims list (like [2, 3, 4]) together with a flat list of scalars.
So we do a small amount of reshaping here.
Implementation note:
- buildFromFlat_ofLenEq consumes the flat list using a proof that the lengths match.
- The public APIs (tensorND and tensorND_ofLenEq) are the ones that establish that proof.
Build a shape-indexed tensor from a flat list, given a proof that the lengths match.
This is a helper for tensorND_ofLenEq/tensorND: users typically want those APIs rather than
recursing over Shape directly.
N-D tensor from a runtime dims list and a flat xs, given a proof of matching length.
This is the “static / proof-carrying” version: if you can prove the lengths match, you avoid any runtime checks and keep a precise shape in the type.
N-D tensor from a runtime dims list and a flat xs, with a runtime length check.
This is the “dynamic / user-friendly” version: it fails with a descriptive message if the number
of provided scalars doesn’t match the implied numel.
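Example (a sketch of the call shape only; how the failure is reported is documented by the definition itself):
def xs : List Float := [0, 1, 2, 3, 4, 5]
def x := tensorND [2, 3] xs
-- 6 scalars for a 2 × 3 shape; a 5- or 7-element list would trigger the descriptive error instead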
Fill/zeros/ones from runtime dims #
All-zeros tensor, from a runtime dims list.
All-ones tensor, from a runtime dims list.
Tactics #
tensorND_ofLenEq needs a proof that your flat list has the right length.
For the common “all dimensions and lists are literals/abbrevs” case, this tactic usually closes that goal automatically.
Usage:
tensorND_ofLenEq ... (by tensor_len)
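For example (assuming the runtime dims list and the flat list come before the proof argument, as in the usage line above):
def x := tensorND_ofLenEq [2, 3] [0.0, 1.0, 2.0, 3.0, 4.0, 5.0] (by tensor_len)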
tensorND! is the checked literal constructor for constants whose length proof should be solved
by tensor_len.
It expands to tensorND_ofLenEq ... (by tensor_len), so you usually don’t have to write the proof.
If the proof can’t be solved (e.g. truly dynamic dims), elaboration fails; use tensorND in that
case.
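Example (literal dims and a literal list, so tensor_len can discharge the length proof behind the scenes):
def x := tensorND! [2, 3] [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]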
Typed variant of tensorND!.
This is the same constructor as tensorND! dims xs, but lets you explicitly specify the element
type when numeric literals would otherwise default to an undesired type.
Example:
def x : Tensor ℚ (shape![2, 2]) := tensorND! (ty := ℚ) [2, 2] [1, 2, 3, 4]
Implementation note: we avoid reserving a common identifier as a syntax keyword (Lean would then
treat it as a keyword in downstream files). Instead, we parse an ident and sanity-check it is
ty.
shape! is a compact convenience macro for examples: build a Shape from a bracketed dimension list.
It expands through reducible shapeOfDims, so it stays definitionally aligned with tensorND!
while still unfolding for dependent proofs and shape typeclass search.
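Example (the dims are arbitrary; shape! also appears in type positions, as in Tensor ℚ (shape![2, 2]) above):
def s := shape![2, 3, 4]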
tensor! is the "nested brackets" constructor, similar to writing nested Python lists in PyTorch.
Examples:
-- shape (2, 3, 4)
def x :=
  tensor! [
    [ [0, 1, 2, 3],     [4, 5, 6, 7],     [8, 9, 10, 11] ],
    [ [12, 13, 14, 15], [16, 17, 18, 19], [20, 21, 22, 23] ]
  ]
It is fully general over rank: the nesting depth determines the rank.
Internally, it computes the dims from list lengths and flattens in row-major order
(last dimension changes fastest), then calls tensorND!.
If you need runtime/ragged handling, use tensorND/tensorDynND instead.
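Two smaller sketches of the same macro (assuming rank-1 nesting is accepted, per the note on rank above):
def v := tensor! [1.0, 2.0, 3.0]           -- rank 1, shape (3)
def m := tensor! [[1.0, 2.0], [3.0, 4.0]]  -- rank 2, shape (2, 2)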
Typed variant of tensor!.
This is useful when the leaf literals don’t uniquely determine the element type.
Example:
def x : Tensor ℚ (shape![2, 2]) :=
tensor! (ty := ℚ) [[1, 2], [3, 4]]
Small index helpers #
Writing ⟨0, by decide⟩ everywhere is noisy.
These macros are intended for examples/tests where the dimension is a literal or abbreviation, so
by decide can close the bounds proof. They are deliberately not a replacement for carrying a real
Fin n in library code.
Examples:
Tensor.vecGet v fin0!
Tensor.vecGet v fin1!
Tensor.vecGet v (fin! 5 3)
Convenience index ⟨1, by decide⟩ for example code where the bound proof is decidable.
Convenience index ⟨i, by decide⟩ packaged as Fin n, for examples/tests with literal bounds.
tensorF! is a convenience macro for constants: build from Float literals, then cast into your
chosen runtime scalar backend using cast : Float → α.
Example:
let w : Tensor α (.dim 3 (.dim 2 .scalar)) := tensorF! cast [3, 2] [0.1, 0.2, 0.3, 0.4, 0.5, 0.6]
This avoids noisy lists like [cast 0.1, cast 0.2, ...].
tensor32! is the tensor! macro specialized to the executable IEEE-754 binary32 backend.
It builds a Tensor Float _ from a nested bracket literal, then casts elementwise via
IEEE32Exec.ofFloat.
Example:
def w : Tensor TorchLean.Floats.IEEE32Exec (shape![2, 2]) :=
tensor32! [[0.1, 0.2], [0.3, 0.4]]
Dynamic tensor wrapper (shape not in the type) #
A tensor paired with its shape, where the shape is stored as data instead of being in the type.
This is useful when:
- you need to accept arbitrary tensors at runtime (CLI / file formats / interoperability),
- but you still want to carry a Shape around so downstream code can inspect it.
- s : Shape
Runtime shape tag carried alongside the tensor.
Shape-indexed tensor whose type is tied to s.
Common dtype helpers (from Float literals) #
1-D tensor from Float literals, cast into the executable IEEE-754 FP32 backend.
2-D tensor from Float literals, cast into the executable IEEE-754 FP32 backend.
Printing #
A compact “dtype name” class used by NN.Tensor.print.
This is deliberately lightweight: it’s only meant for friendly IO output in examples/tests.
- name : String
Display name for Float tensors in NN.Tensor.print.
Display name for rational tensors in NN.Tensor.print.
Display name for integer tensors in NN.Tensor.print.
Display name for proof-level FP32 rounding-model tensors in NN.Tensor.print.
Display name for executable IEEE-754 FP32 tensors in NN.Tensor.print.
Display name for proof-level real-valued tensors in NN.Tensor.print.
Display name for TorchLean complex scalars in NN.Tensor.print.
Default printing for element types that support ToString.
Printing is intentionally disabled for proof-level ℝ tensors.
Printing is intentionally disabled for the proof-only rounding model FP32.
Print a tensor with a dtype tag, or throw an IO.userError if the dtype refuses to print.
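Example (a hedged end-to-end sketch, assuming print takes the tensor as its only explicit argument and runs in IO; a Tensor ℝ in the same position would raise the IO.userError described above):
def w : Tensor TorchLean.Floats.IEEE32Exec (shape![2, 2]) :=
  tensor32! [[0.1, 0.2], [0.3, 0.4]]
#eval NN.Tensor.print w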