Advanced tensor basics #
This file is the advanced “tensor basics” map for TorchLean. It covers the core operations people usually ask about once they understand typed shapes:
- constructing small tensors;
- indexing and slicing through the shape-indexed `Spec.Tensor` representation;
- converting between `Spec.Tensor α s` and the array-backed `TensorArray.Tensor α shape`;
- matrix-vector multiplication and elementwise arithmetic;
- manual reshaping when you want the row-major convention to be visible.
The bridge functions themselves live in `NN/Spec/Core/TensorBridge.lean`. This example imports that
library code and only supplies concrete demo values. In other words: this file is a tour, not a
second tensor implementation.
Why both representations exist #
We intentionally keep two representations because they serve different goals:
- The spec representation (`Spec.Tensor`) is a nested function tree (`Fin n → ...`), which is great for writing total definitions and proving shape-correctness by structural recursion.
- The array representation (`TensorArray.Tensor`) is compact and practical for serialization and numeric kernels (e.g. `matvec` on flat buffers).
The conversion boundary is where you want to be explicit about layout and conventions. TorchLean uses a row-major convention for flattening/unflattening, so that the last axis varies fastest.
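The convention can be sketched in a few lines of plain Lean, independent of the TorchLean API: for a `[2,3]` shape the flat index of entry `(i, j)` is `i * 3 + j`, so `j` (the last axis) varies fastest.

```lean
-- Row-major flat index for a [2,3] shape: the last axis varies fastest.
-- Pure-Lean sketch; TorchLean's actual bridge helpers live in TensorBridge.lean.
def flatIndex (i : Fin 2) (j : Fin 3) : Fin 6 :=
  ⟨i.val * 3 + j.val, by have := i.isLt; have := j.isLt; omega⟩

#eval flatIndex 1 2  -- 5, the last flat position
```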
How to read this file #
The sections below are small "micro demos" you can copy into other scratch files:
- Convert an array-backed matrix to a `Spec.Tensor` and index into it.
- Do a simple linear `matvec` in `TensorArray`, then convert the output back to `Spec.Tensor`.
- Work with a small batched tensor and slice out samples.
Put your cursor on the `#tensor_view` and `#tensor_stats_view` commands to inspect the values in
the Lean infoview. The notes below also show the closest PyTorch spelling for each operation.
Important distinction:
- Ordinary tutorial definitions use plain `def`; this is the code you should copy into normal TorchLean modules.
- A few `meta def ...View` declarations exist only for widgets. The infoview evaluator needs interpreter-visible bridge code, while normal compiled modules can use the plain `def`s above them.
1) Array-backed matrix → spec tensor (and indexing) #
PyTorch analogue:

```python
matrix = torch.tensor([[1., 2., 3.], [4., 5., 6.]])
first_row = matrix[0]
```
TorchLean keeps the shape in the type after the conversion: `matrixSpec` has shape `[2,3]`, and
`firstRowSpec` has shape `[3]`.
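The spec-side idea can be sketched with plain nested functions (the names below are illustrative stand-ins, not the actual `Spec.Tensor` constructors):

```lean
-- A [2,3] "tensor" as a nested function; indexing is just application.
def matrixFun : Fin 2 → Fin 3 → Float :=
  fun i j => Float.ofNat (i.val * 3 + j.val + 1)

-- Extracting the first row fixes the outer index, leaving shape [3].
def firstRowFun : Fin 3 → Float := matrixFun 0

#eval firstRowFun 2  -- evaluates to 3.0, the third entry of the first row
```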
A 2×3 matrix stored in row-major order as a TensorArray.Tensor.
Convert the matrix into the spec-level representation (Spec.Tensor).
Extract the first row (shape [3]) by pattern-matching on the outer dimension.
Convert the extracted row back into the array-backed representation.
Widget lane for this section.
These meta declarations deliberately mirror the ordinary definitions above. They are not the
recommended programming style; they simply make the bridge computation executable for ProofWidgets.
2) A compact linear layer: matvec in TensorArray, then back to Spec.Tensor #
PyTorch analogue:

```python
weight = torch.full((4, 3), 0.1)
x = torch.tensor([1., 2., 3.])
y = weight @ x
```
The array-backed `TensorArray.matvec` is the compact runtime-style operation; converting the result
back with `toTensor` makes it usable by spec/proof-facing code and by the tensor widgets.
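As a rough sketch of what such a flat-buffer kernel does (the real `TensorArray.matvec` signature may differ; this is a self-contained illustration):

```lean
-- Row-major matvec on flat arrays: output entry i is the dot product of
-- row i of w (at flat offsets i*cols .. i*cols+cols-1) with x.
def matvecFlat (rows cols : Nat) (w x : Array Float) : Array Float :=
  Array.ofFn fun (i : Fin rows) =>
    (List.range cols).foldl (fun acc j => acc + w[i.val * cols + j]! * x[j]!) 0.0

#eval matvecFlat 2 3 #[1, 2, 3, 4, 5, 6] #[1, 1, 1]  -- row sums: 6 and 15
```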
A 4×3 weight matrix filled with a constant, stored as TensorArray.
A length-3 input vector built as a Spec.Tensor (values 1,2,3).
The same input, converted to TensorArray so we can call TensorArray.matvec.
Compute weightMatrix @ inputAsArray using the array-backed kernel.
Convert the output back to Spec.Tensor (useful if the rest of the pipeline is spec-level).
Widget-only mirror for the bridge-backed output.
3) A small batch tensor and slicing samples #
PyTorch analogue:

```python
batch = torch.arange(1., 13.).reshape(2, 2, 3)
first = batch[0]
second = batch[1]
```
In TorchLean, slicing a `Spec.Tensor` is ordinary dependent pattern matching: the outer
`Spec.Tensor.dim` exposes the `Fin 2 → Tensor ...` function, and the index proof records that the
sample exists.
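With the nested-function stand-in, slicing is just applying the outer index (again, these names are illustrative, not the TorchLean API):

```lean
-- A [2,2,3] batch as a nested function; values run 1..12 in row-major order.
def batchFun : Fin 2 → Fin 2 → Fin 3 → Float :=
  fun b i j => Float.ofNat (b.val * 6 + i.val * 3 + j.val + 1)

-- Slicing out a sample fixes the batch index, leaving shape [2,3].
def firstSampleFun  : Fin 2 → Fin 3 → Float := batchFun 0
def secondSampleFun : Fin 2 → Fin 3 → Float := batchFun 1

#eval secondSampleFun 1 2  -- evaluates to 12.0, the last of the 12 values
```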
A 2×2×3 batch, encoded as a flat array with runtime shape [2,2,3].
Convert the whole batch into the spec tensor.
Slice out the first sample (shape [2,3]).
Slice out the second sample (shape [2,3]).
Convert the first sample back to TensorArray (e.g. for a kernel call).
Convert the second sample back to TensorArray.
Widget-only mirrors for the bridge-backed batch tensors.
4) Reshaping example (manual, spec-side) #
PyTorch analogue:

```python
flat = torch.tensor([1., 2., 3., 4., 5., 6.])
reshaped = flat.reshape(2, 3)
```
This section intentionally spells out the row-major index arithmetic. Most user code should prefer library reshape helpers when available, but this makes the layout convention completely explicit.
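The index arithmetic can be sketched in isolation (pure Lean, helper names made up for the illustration): position `(i, j)` of the `[2,3]` result reads flat position `i * 3 + j`.

```lean
-- A flat length-6 vector as a function, with values 1..6.
def flatVec : Fin 6 → Float := fun k => Float.ofNat (k.val + 1)

-- Manual row-major reshape from [6] to [2,3]: (i, j) reads flat slot i*3 + j.
def reshapedFun : Fin 2 → Fin 3 → Float :=
  fun i j => flatVec ⟨i.val * 3 + j.val, by have := i.isLt; have := j.isLt; omega⟩

#eval reshapedFun 1 0  -- evaluates to 4.0, the fourth flat element
```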
A flat length-6 vector in TensorArray.
Convert it to Spec.Tensor (shape [6]).
Reshape [6] into [2,3] by explicit index arithmetic (row-major).
Convert the reshaped matrix back to TensorArray.
Widget-only mirror for the manual reshape.
5) Elementwise ops across representations #
PyTorch analogue:

```python
a = torch.full((2, 2), 2.)
b = torch.full((2, 2), 3.)
c = a + b
```
Here one operand starts array-backed and the other starts spec-backed, so the explicit bridge call documents where the representation boundary is crossed.
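On the nested-function stand-in, elementwise addition is just pointwise lifting of `+` (sketch only; the actual TorchLean op names may differ):

```lean
-- Lift Float addition pointwise over two same-shape [m,n] functions.
def addFun {m n : Nat} (a b : Fin m → Fin n → Float) : Fin m → Fin n → Float :=
  fun i j => a i j + b i j

def twosFun   : Fin 2 → Fin 2 → Float := fun _ _ => 2.0
def threesFun : Fin 2 → Fin 2 → Float := fun _ _ => 3.0

#eval addFun twosFun threesFun 0 0  -- evaluates to 5.0
```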
A 2×2 array-backed tensor filled with twos.
A 2×2 spec tensor filled with threes.
Convert tensorB so both operands live in the array-backed representation.
Elementwise addition in TensorArray.
Convert the result back to Spec.Tensor.
Widget-only mirror for the bridge-backed elementwise sum.
6) "Forward + gradient-shape" worked example #
PyTorch analogue:

```python
out = weight @ x
grad_like = 2 * out
```
This is not an autograd example; it is a shape-preserving tensor transformation that looks like a
gradient buffer. Autograd examples live under NN/Examples/Quickstart/AutogradBasics.lean.
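The shape-preserving transformation itself can be sketched in a few lines (illustrative names; not the TorchLean API):

```lean
-- A "gradient-like" map: scale every entry by 2, preserving the shape [n].
def scaleTwo {n : Nat} (v : Fin n → Float) : Fin n → Float :=
  fun i => 2.0 * v i

-- A stand-in forward output of shape [4], with values 1..4.
def outFun : Fin 4 → Float := fun i => Float.ofNat (i.val + 1)

#eval scaleTwo outFun 3  -- evaluates to 8.0
```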
A compact forward pass: weights @ input.
The same forward pass, returning a spec tensor.
A small "gradient" transformation: elementwise multiply by 2 (same shape, spec-side).
Convert the example gradient back to TensorArray.
Widget-only mirror for the bridge-backed gradient-like tensor.