Tensor constructors (spec layer)
These are small, total constructors for building Spec.Tensor values directly.
They are used heavily inside the spec layer (models/layers) and in proofs, where we want:
- straightforward definitional unfolding, and
- no dependence on `IO` or dynamic shape checks.
If you want a PyTorch-style user experience for examples (dynamic dims, runtime errors, and `Float`-literal casting), prefer `NN/Tensor/API.lean` instead.
Design choice (why these are "total"):
- In the spec layer we would rather make edge cases explicit than throw runtime exceptions.
- If something is shape-invalid, we want Lean to reject it at elaboration time.
- If something is index-invalid at runtime (e.g. an array-backed constructor with the wrong size), we choose a predictable fallback (usually `Inhabited.default`) and let verification code decide whether that situation is allowed.
Scalar constructor (explicit name).
PyTorch analogy: a 0-dim tensor holding one value.
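Since this page does not reproduce the underlying `Spec.Tensor` definition, here is a minimal hedged sketch: the toy `Shape`/`Tensor` model below and the name `ofScalar` are illustrative assumptions, not the library's actual definitions.

```lean
-- Toy stand-ins for the spec layer (assumption; the real Spec.Tensor
-- differs): a tensor over shape s is a curried function into α.
inductive Shape where
  | scalar : Shape
  | dim : Nat → Shape → Shape

def Tensor (α : Type) : Shape → Type
  | .scalar => α
  | .dim n s => Fin n → Tensor α s

-- A 0-dim tensor holding exactly one value (PyTorch: torch.tensor(3.0)).
-- Total: no IO, no runtime checks, and it unfolds definitionally.
def Tensor.ofScalar {α : Type} (x : α) : Tensor α .scalar := x

example : Tensor Float .scalar := Tensor.ofScalar 3.0
```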
Vector constructor from a `Fin n → α` function.
PyTorch analogy: `torch.tensor([...])` with shape `(n,)`, but our input is a function, not a list.
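A hedged sketch of such a function-based vector constructor, using a toy function-based tensor model (an assumption; the real `Spec.Tensor` differs) and a hypothetical name `ofFn` echoing `Array.ofFn`:

```lean
-- Toy model (assumption; the real Spec.Tensor differs).
inductive Shape where
  | scalar : Shape
  | dim : Nat → Shape → Shape

def Tensor (α : Type) : Shape → Type
  | .scalar => α
  | .dim n s => Fin n → Tensor α s

-- A shape-(n,) vector from a function rather than a list.
def Tensor.ofFn {α : Type} {n : Nat} (f : Fin n → α) :
    Tensor α (.dim n .scalar) :=
  fun i => f i

-- Entries are recovered by straightforward definitional unfolding.
example (f : Fin 3 → Float) (i : Fin 3) : Tensor.ofFn f i = f i := rfl
```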
Generic vector creation that works with any type (handles `n = 0`).
Why this exists:
- For `n = 0`, a function `Fin 0 → α` is fine (it has no inputs), but it can be awkward at call sites. This helper makes the intent explicit and keeps patterns uniform.
- The `n = 0` case is definitionally a tensor with an empty outer dimension (so the function body is never evaluated).
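A sketch of the `n = 0` behavior under a toy tensor model (all names here are illustrative assumptions): the function body can never be evaluated because `Fin 0` is empty.

```lean
-- Toy model (assumption; the real Spec.Tensor differs).
inductive Shape where
  | scalar : Shape
  | dim : Nat → Shape → Shape

def Tensor (α : Type) : Shape → Type
  | .scalar => α
  | .dim n s => Fin n → Tensor α s

-- Generic vector creation; works uniformly for every n, including 0.
def Tensor.vec {α : Type} (n : Nat) (f : Fin n → α) :
    Tensor α (.dim n .scalar) :=
  fun i => f i

-- n = 0: a tensor with an empty outer dimension; `i.elim0` never runs.
example : Tensor Float (.dim 0 .scalar) := Tensor.vec 0 fun i => i.elim0
```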
Build a length-n vector from a Fin n → scalar function.
This is a small wrapper that reads nicely at call sites where we already have scalar tensors.
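Under a toy function-based tensor model (an assumption; the name `ofScalars` is also a guess), such a wrapper is nearly the identity, since a scalar tensor unfolds to a plain value:

```lean
-- Toy model (assumption; the real Spec.Tensor differs).
inductive Shape where
  | scalar : Shape
  | dim : Nat → Shape → Shape

def Tensor (α : Type) : Shape → Type
  | .scalar => α
  | .dim n s => Fin n → Tensor α s

-- Build a length-n vector from n scalar tensors. In this toy model
-- `Tensor α .scalar` unfolds to `α`, so the wrapper just reindexes.
def Tensor.ofScalars {α : Type} {n : Nat} (f : Fin n → Tensor α .scalar) :
    Tensor α (.dim n .scalar) :=
  fun i => f i
```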
A singleton vector.
PyTorch analogy: x.unsqueeze(0) for a scalar x.
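A hedged sketch under a toy tensor model (the model and the name `single` are assumptions):

```lean
-- Toy model (assumption; the real Spec.Tensor differs).
inductive Shape where
  | scalar : Shape
  | dim : Nat → Shape → Shape

def Tensor (α : Type) : Shape → Type
  | .scalar => α
  | .dim n s => Fin n → Tensor α s

-- A singleton vector: one leading dimension of size 1
-- (PyTorch: x.unsqueeze(0) for a scalar x).
def Tensor.single {α : Type} {s : Shape} (x : Tensor α s) :
    Tensor α (.dim 1 s) :=
  fun _ => x
```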
Pad a tensor with `n` leading dimensions of size 1.
This is the tensor-level companion of `Shape.padLeft`. It is useful for broadcasting-style normalization: if you need a tensor to have extra leading batch dimensions of size 1, this does so without changing any underlying values.
PyTorch analogy: repeated `unsqueeze(0)` (or viewing a tensor as having extra leading singleton dims).
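A sketch of the shape/tensor pair under a toy model (the model and the argument order of `padLeft` are assumptions; only the name `Shape.padLeft` comes from the library):

```lean
-- Toy model (assumption; the real Spec.Tensor differs).
inductive Shape where
  | scalar : Shape
  | dim : Nat → Shape → Shape

def Tensor (α : Type) : Shape → Type
  | .scalar => α
  | .dim n s => Fin n → Tensor α s

-- Shape-level padding: n extra leading size-1 dimensions.
def Shape.padLeft : Nat → Shape → Shape
  | 0,     s => s
  | n + 1, s => .dim 1 (Shape.padLeft n s)

-- Tensor-level companion: only the shape index changes, never a value.
def Tensor.padLeft {α : Type} {s : Shape} :
    (n : Nat) → Tensor α s → Tensor α (Shape.padLeft n s)
  | 0,     t => t
  | n + 1, t => fun _ => Tensor.padLeft n t
```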
Build a vector tensor from an array.
We require an explicit proof that the target length matches the array size. This keeps the spec construction honest and makes mismatches obvious at call sites.
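A hedged sketch of what such a proof-carrying signature can look like under a toy tensor model (the model and the exact signature are guesses; only the idea of an explicit size proof comes from the text):

```lean
-- Toy model (assumption; the real Spec.Tensor differs).
inductive Shape where
  | scalar : Shape
  | dim : Nat → Shape → Shape

def Tensor (α : Type) : Shape → Type
  | .scalar => α
  | .dim n s => Fin n → Tensor α s

-- Array-backed vector with an explicit size proof: a length mismatch is
-- rejected at elaboration time instead of falling back at runtime.
def Tensor.ofArray1D {α : Type} {n : Nat} (a : Array α) (h : a.size = n) :
    Tensor α (.dim n .scalar) :=
  fun i => a[i.val]'(h ▸ i.isLt)

-- The proof obligation is discharged by rfl for literal arrays.
example : Tensor Float (.dim 2 .scalar) := Tensor.ofArray1D #[1.0, 2.0] rfl
```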
Build a tensor with one leading dimension from an array of inner tensors.
This is the "tensor-of-tensors" analogue of `Tensor.ofArray1D`:
- input: `Array (Tensor α s)` of length `n`
- output: `Tensor α (.dim n s)`

We require an explicit proof that the array size matches `n` so callers do not silently drop/pad data.
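A hedged sketch under a toy tensor model (the model and the name `ofArrayOuter` are assumptions):

```lean
-- Toy model (assumption; the real Spec.Tensor differs).
inductive Shape where
  | scalar : Shape
  | dim : Nat → Shape → Shape

def Tensor (α : Type) : Shape → Type
  | .scalar => α
  | .dim n s => Fin n → Tensor α s

-- Stack an array of inner tensors into one leading dimension; the size
-- proof keeps callers from silently dropping or padding data.
def Tensor.ofArrayOuter {α : Type} {s : Shape} {n : Nat}
    (a : Array (Tensor α s)) (h : a.size = n) : Tensor α (.dim n s) :=
  fun i => a[i.val]'(h ▸ i.isLt)
```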
View a length-`n` vector as an `n × 1` "column" matrix.
PyTorch analogy: `v.unsqueeze(-1)` for a 1D tensor `v` (or `v.reshape(n, 1)`).
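A hedged sketch under a toy tensor model (the model and the name `asColumn` are assumptions):

```lean
-- Toy model (assumption; the real Spec.Tensor differs).
inductive Shape where
  | scalar : Shape
  | dim : Nat → Shape → Shape

def Tensor (α : Type) : Shape → Type
  | .scalar => α
  | .dim n s => Fin n → Tensor α s

-- View a length-n vector as an n × 1 column matrix
-- (PyTorch: v.unsqueeze(-1)).
def Tensor.asColumn {α : Type} {n : Nat} (v : Tensor α (.dim n .scalar)) :
    Tensor α (.dim n (.dim 1 .scalar)) :=
  fun i _ => v i

-- Same values, reindexed; the equation holds definitionally.
example (v : Tensor Float (.dim 4 .scalar)) (i : Fin 4) :
    Tensor.asColumn v i 0 = v i := rfl
```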